The hybrid ARIMA-LSTM model leaves room for considerable experimentation. For best performance, a balance must be struck between the volatility levels at which the ARIMA and LSTM components each work best. Using shorter MA periods, which yield a non-mesokurtic distribution (Pearson kurtosis away from 3), may achieve a better volatility balance between the two models.
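To make the mesokurtic criterion concrete, here is a small sketch on a synthetic random walk (not the notebook's dataset) showing how the Pearson kurtosis K of an SMA-smoothed series varies with the MA period; the `sma_kurtosis` helper and the synthetic `prices` series are illustrative assumptions:

```python
# Illustrative sketch on a synthetic random walk: how the Pearson kurtosis K of
# an SMA-smoothed series varies with the MA period. With fisher=False, K = 3
# marks a mesokurtic distribution.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))  # synthetic price series

def sma_kurtosis(series, period, window=14):
    """Pearson kurtosis of the last `window` points of a `period`-day SMA."""
    sma = np.convolve(series, np.ones(period) / period, mode='valid')
    return kurtosis(sma[-window:], fisher=False)

for period in (4, 10, 20, 30):
    print(period, round(sma_kurtosis(prices, period), 3))
```

The notebook below applies the same idea with TA-Lib moving averages, searching for the period whose K lands closest to 3.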
import pandas as pd
pd.set_option('display.max_rows', 500)
import timeit
!pip install -q -U keras-tuner
import keras_tuner as kt
!pip install pmdarima
Successfully installed pmdarima-1.8.4 statsmodels-0.13.1
import pmdarima
url = 'https://launchpad.net/~mario-mariomedina/+archive/ubuntu/talib/+files'
!wget $url/libta-lib0_0.4.0-oneiric1_amd64.deb -qO libta.deb
!wget $url/ta-lib0-dev_0.4.0-oneiric1_amd64.deb -qO ta.deb
!dpkg -i libta.deb ta.deb
!pip install ta-lib
import talib
Successfully built ta-lib
Successfully installed ta-lib-0.4.22
import datetime
import json
import math
import os
import warnings

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import pyplot
from matplotlib.pyplot import figure
# plt.rcParams.update({'font.size': 16})

import sklearn
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

import statsmodels.tsa.api
from statsmodels.tsa.api import VAR
from statsmodels.tsa.statespace.varmax import VARMAX, VARMAXResults
from statsmodels.tools.sm_exceptions import ConvergenceWarning

from scipy.stats import kurtosis

import pmdarima as pm
from pmdarima import auto_arima
from talib import abstract

import tensorflow
import keras
from keras import backend as K
from keras.models import load_model
from keras.regularizers import l1_l2
from keras.utils.generic_utils import get_custom_objects
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.layers import (Dense, LSTM, Dropout, Bidirectional, BatchNormalization,
                                     Embedding, TimeDistributed, LeakyReLU, GRU, Activation,
                                     RepeatVector)
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import plot_model

warnings.simplefilter('ignore', ConvergenceWarning)
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
cd drive/MyDrive/Stock price prediction/Generated datasets
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Generated datasets
df = pd.read_csv("FULL_Data_google_COVID_bull_bear.csv",parse_dates=[0])
df.tail(10)
| Unnamed: 0 | Unnamed: 0.1 | Unnamed: 0.1.1 | Unnamed: 0.1.1.1 | Open | High | Low | Close | Adj Close | Volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | Date | search | COVID positiveIncrease | COVID deathIncrease | bull score | bear score | fourier bull 10 | fourier bull 30 | fourier bear 10 | fourier bear 30 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1592 | 1592 | 1781 | 1781 | 1781 | 150.199997 | 151.429993 | 150.059998 | 150.809998 | 150.809998 | 56787900.0 | 150.565717 | 148.423811 | -1.137777 | 2.817933 | 154.059677 | 142.787944 | 150.767809 | 5.009368 | 93.428749 | -0.061228 | 100.779503 | -0.039111 | 103.599003 | -0.022436 | 2021-11-09 | 19 | 112313 | 1258 | 0.119141 | 0.111328 | NaN | NaN | NaN | NaN |
| 1593 | 1593 | 1782 | 1782 | 1782 | 150.020004 | 150.130005 | 147.850006 | 147.919998 | 147.919998 | 65187100.0 | 150.417145 | 148.729049 | -1.236913 | 2.144358 | 153.017766 | 144.440332 | 148.869268 | 4.989888 | 92.922909 | -0.061683 | 99.694365 | -0.039762 | 101.872301 | -0.022657 | 2021-11-10 | 19 | 80301 | 1470 | 0.154297 | 0.109375 | NaN | NaN | NaN | NaN |
| 1594 | 1594 | 1783 | 1783 | 1783 | 148.960007 | 149.429993 | 147.679993 | 147.869995 | 147.869995 | 41000000.0 | 150.110001 | 149.060477 | -1.165047 | 1.767475 | 152.595428 | 145.525526 | 148.203086 | 4.989548 | 92.416471 | -0.062129 | 98.604584 | -0.040391 | 100.137594 | -0.022839 | 2021-11-11 | 19 | 94975 | 1662 | 0.102845 | 0.126915 | NaN | NaN | NaN | NaN |
| 1595 | 1595 | 1784 | 1784 | 1784 | 148.429993 | 150.399994 | 147.479996 | 149.990005 | 149.990005 | 63632600.0 | 149.895715 | 149.357144 | -0.869308 | 1.420732 | 152.198608 | 146.515681 | 149.394365 | 5.003879 | 91.909483 | -0.062566 | 97.510555 | -0.040998 | 98.396260 | -0.022980 | 2021-11-12 | 19 | 55499 | 797 | 0.157277 | 0.080595 | NaN | NaN | NaN | NaN |
| 1596 | 1596 | 1785 | 1785 | 1785 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 2021-11-13 | 19 | 146529 | 2505 | 0.139459 | 0.083243 | NaN | NaN | NaN | NaN |
| 1597 | 1597 | 1786 | 1786 | 1786 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 2021-11-14 | 19 | 40964 | 479 | 0.151261 | 0.100840 | NaN | NaN | NaN | NaN |
| 1598 | 1598 | 1787 | 1787 | 1787 | 150.369995 | 151.880005 | 149.429993 | 150.000000 | 150.000000 | 59222800.0 | 149.758571 | 149.602859 | -0.907641 | 1.229694 | 152.062246 | 147.143471 | 149.798122 | 5.003946 | 91.401994 | -0.062993 | 96.412672 | -0.041581 | 96.649685 | -0.023077 | 2021-11-15 | 22 | 30290 | 148 | 0.136737 | 0.109389 | NaN | NaN | NaN | NaN |
| 1599 | 1599 | 1788 | 1788 | 1788 | 149.940002 | 151.490005 | 149.339996 | 151.000000 | 151.000000 | 59256200.0 | 149.718571 | 149.814763 | -0.791320 | 1.236243 | 152.287250 | 147.342277 | 150.599374 | 5.010635 | 90.894052 | -0.063410 | 95.311334 | -0.042140 | 94.899260 | -0.023130 | 2021-11-16 | 22 | 138962 | 1294 | 0.135531 | 0.115385 | NaN | NaN | NaN | NaN |
| 1600 | 1600 | 1789 | 1789 | 1789 | 151.000000 | 155.000000 | 150.990005 | 153.490005 | 153.490005 | 88807000.0 | 150.154286 | 150.040002 | -0.657719 | 1.467121 | 152.974245 | 147.105759 | 152.526461 | 5.027099 | 90.385704 | -0.063817 | 94.206941 | -0.042673 | 93.146378 | -0.023135 | 2021-11-17 | 22 | 87626 | 1290 | 0.100870 | 0.126957 | NaN | NaN | NaN | NaN |
| 1601 | 1601 | 1790 | 1790 | 1790 | 153.710007 | 158.669998 | 153.050003 | 157.869995 | 157.869995 | 137659100.0 | 151.162857 | 150.450002 | -0.609656 | 2.267825 | 154.985653 | 145.914351 | 156.088817 | 5.055417 | 89.877000 | -0.064214 | 93.099895 | -0.043179 | 91.392433 | -0.023090 | 2021-11-18 | 22 | 111404 | 1637 | 0.145098 | 0.121569 | NaN | NaN | NaN | NaN |
cd ..
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction
cd Archana - LSTM Hybrid/Outputs/CovidShare
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs/Covid
pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name().head(5)
0    Saturday
1      Sunday
3     Tuesday
7    Saturday
8      Sunday
Name: Date, dtype: object
len(pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name())
497
len(df)
1602
len(df) - len(pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name())
1105
df.dropna(inplace=True)
len(df)
1080
pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name().head(3)
Series([], Name: Date, dtype: object)
df.head(5)
| Unnamed: 0 | Unnamed: 0.1 | Unnamed: 0.1.1 | Unnamed: 0.1.1.1 | Open | High | Low | Close | Adj Close | Volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | Date | search | COVID positiveIncrease | COVID deathIncrease | bull score | bear score | fourier bull 10 | fourier bull 30 | fourier bear 10 | fourier bear 30 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 2 | 191 | 191 | 191 | 36.220001 | 36.325001 | 35.775002 | 35.875000 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.960520 | 38.672945 | 34.830864 | 35.924548 | 3.551770 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 2017-07-03 | 15 | 0 | 0 | 0.666667 | 0.000000 | 0.142778 | 0.146810 | 0.100537 | 0.099251 |
| 4 | 4 | 193 | 193 | 193 | 35.922501 | 36.197498 | 35.680000 | 36.022499 | 34.194897 | 86278400.0 | 36.095357 | 36.634762 | 0.328795 | 0.852735 | 38.340231 | 34.929292 | 35.989849 | 3.555991 | 38.240991 | 0.049445 | 29.954520 | 0.099254 | 43.438321 | -0.053936 | 2017-07-05 | 15 | 0 | 0 | 0.400000 | 0.000000 | 0.144487 | 0.145833 | 0.100630 | 0.096361 |
| 5 | 5 | 194 | 194 | 194 | 35.755001 | 35.875000 | 35.602501 | 35.682499 | 33.872143 | 96515200.0 | 35.984999 | 36.495238 | 0.346702 | 0.677629 | 37.850495 | 35.139980 | 35.784949 | 3.546235 | 38.027974 | 0.051918 | 30.209839 | 0.095602 | 43.557403 | -0.053820 | 2017-07-06 | 15 | 0 | 0 | 0.142857 | 0.142857 | 0.145346 | 0.145164 | 0.100672 | 0.094761 |
| 6 | 6 | 195 | 195 | 195 | 35.724998 | 36.187500 | 35.724998 | 36.044998 | 34.216255 | 76806800.0 | 36.001071 | 36.362023 | 0.387422 | 0.387634 | 37.137291 | 35.586756 | 35.958315 | 3.556633 | 37.818962 | 0.054401 | 30.470232 | 0.091907 | 43.662260 | -0.053608 | 2017-07-07 | 15 | 0 | 0 | 0.333333 | 0.000000 | 0.146208 | 0.144377 | 0.100711 | 0.093072 |
| 9 | 9 | 198 | 198 | 198 | 36.027500 | 36.487499 | 35.842499 | 36.264999 | 34.425095 | 84362400.0 | 35.973571 | 36.243809 | 0.388315 | 0.308042 | 36.859893 | 35.627725 | 36.162771 | 3.562891 | 37.613953 | 0.056893 | 30.735430 | 0.088177 | 43.752965 | -0.053302 | 2017-07-10 | 14 | 0 | 0 | 0.000000 | 0.000000 | 0.148802 | 0.141354 | 0.100808 | 0.087587 |
# Drop the four unnamed index columns
stock_col = list(df.columns)
stock_col = stock_col[4:]
dataset_final = df[stock_col]
dataset_final.head(5)
| Open | High | Low | Close | Adj Close | Volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | Date | search | COVID positiveIncrease | COVID deathIncrease | bull score | bear score | fourier bull 10 | fourier bull 30 | fourier bear 10 | fourier bear 30 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 36.220001 | 36.325001 | 35.775002 | 35.875000 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.960520 | 38.672945 | 34.830864 | 35.924548 | 3.551770 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 2017-07-03 | 15 | 0 | 0 | 0.666667 | 0.000000 | 0.142778 | 0.146810 | 0.100537 | 0.099251 |
| 4 | 35.922501 | 36.197498 | 35.680000 | 36.022499 | 34.194897 | 86278400.0 | 36.095357 | 36.634762 | 0.328795 | 0.852735 | 38.340231 | 34.929292 | 35.989849 | 3.555991 | 38.240991 | 0.049445 | 29.954520 | 0.099254 | 43.438321 | -0.053936 | 2017-07-05 | 15 | 0 | 0 | 0.400000 | 0.000000 | 0.144487 | 0.145833 | 0.100630 | 0.096361 |
| 5 | 35.755001 | 35.875000 | 35.602501 | 35.682499 | 33.872143 | 96515200.0 | 35.984999 | 36.495238 | 0.346702 | 0.677629 | 37.850495 | 35.139980 | 35.784949 | 3.546235 | 38.027974 | 0.051918 | 30.209839 | 0.095602 | 43.557403 | -0.053820 | 2017-07-06 | 15 | 0 | 0 | 0.142857 | 0.142857 | 0.145346 | 0.145164 | 0.100672 | 0.094761 |
| 6 | 35.724998 | 36.187500 | 35.724998 | 36.044998 | 34.216255 | 76806800.0 | 36.001071 | 36.362023 | 0.387422 | 0.387634 | 37.137291 | 35.586756 | 35.958315 | 3.556633 | 37.818962 | 0.054401 | 30.470232 | 0.091907 | 43.662260 | -0.053608 | 2017-07-07 | 15 | 0 | 0 | 0.333333 | 0.000000 | 0.146208 | 0.144377 | 0.100711 | 0.093072 |
| 9 | 36.027500 | 36.487499 | 35.842499 | 36.264999 | 34.425095 | 84362400.0 | 35.973571 | 36.243809 | 0.388315 | 0.308042 | 36.859893 | 35.627725 | 36.162771 | 3.562891 | 37.613953 | 0.056893 | 30.735430 | 0.088177 | 43.752965 | -0.053302 | 2017-07-10 | 14 | 0 | 0 | 0.000000 | 0.000000 | 0.148802 | 0.141354 | 0.100808 | 0.087587 |
stock_col = list(df.columns)
# Keep the price/technical-indicator columns through 'Date', then append the two COVID columns
stock_col1 = stock_col[4:len(stock_col) - 9]
stock_col2 = stock_col[len(stock_col) - 8:len(stock_col) - 6]
stock_col1.append(stock_col2[0])
stock_col1.append(stock_col2[1])
dataset_final = df[stock_col1]
dataset_final.head(5)
| Open | High | Low | Close | Adj Close | Volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | Date | COVID positiveIncrease | COVID deathIncrease | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 36.220001 | 36.325001 | 35.775002 | 35.875000 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.960520 | 38.672945 | 34.830864 | 35.924548 | 3.551770 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 2017-07-03 | 0 | 0 |
| 4 | 35.922501 | 36.197498 | 35.680000 | 36.022499 | 34.194897 | 86278400.0 | 36.095357 | 36.634762 | 0.328795 | 0.852735 | 38.340231 | 34.929292 | 35.989849 | 3.555991 | 38.240991 | 0.049445 | 29.954520 | 0.099254 | 43.438321 | -0.053936 | 2017-07-05 | 0 | 0 |
| 5 | 35.755001 | 35.875000 | 35.602501 | 35.682499 | 33.872143 | 96515200.0 | 35.984999 | 36.495238 | 0.346702 | 0.677629 | 37.850495 | 35.139980 | 35.784949 | 3.546235 | 38.027974 | 0.051918 | 30.209839 | 0.095602 | 43.557403 | -0.053820 | 2017-07-06 | 0 | 0 |
| 6 | 35.724998 | 36.187500 | 35.724998 | 36.044998 | 34.216255 | 76806800.0 | 36.001071 | 36.362023 | 0.387422 | 0.387634 | 37.137291 | 35.586756 | 35.958315 | 3.556633 | 37.818962 | 0.054401 | 30.470232 | 0.091907 | 43.662260 | -0.053608 | 2017-07-07 | 0 | 0 |
| 9 | 36.027500 | 36.487499 | 35.842499 | 36.264999 | 34.425095 | 84362400.0 | 35.973571 | 36.243809 | 0.388315 | 0.308042 | 36.859893 | 35.627725 | 36.162771 | 3.562891 | 37.613953 | 0.056893 | 30.735430 | 0.088177 | 43.752965 | -0.053302 | 2017-07-10 | 0 | 0 |
# Set the date to datetime data
datetime_series = pd.to_datetime(dataset_final['Date'])
datetime_index = pd.DatetimeIndex(datetime_series.values)
dataset_final = dataset_final.set_index(datetime_index)
dataset_final = dataset_final.sort_values(by='Date')
dataset_final = dataset_final.drop(columns='Date')
dataset_final.head(5)
| Open | High | Low | Close | Adj Close | Volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | COVID positiveIncrease | COVID deathIncrease | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2017-07-03 | 36.220001 | 36.325001 | 35.775002 | 35.875000 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.960520 | 38.672945 | 34.830864 | 35.924548 | 3.551770 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 0 | 0 |
| 2017-07-05 | 35.922501 | 36.197498 | 35.680000 | 36.022499 | 34.194897 | 86278400.0 | 36.095357 | 36.634762 | 0.328795 | 0.852735 | 38.340231 | 34.929292 | 35.989849 | 3.555991 | 38.240991 | 0.049445 | 29.954520 | 0.099254 | 43.438321 | -0.053936 | 0 | 0 |
| 2017-07-06 | 35.755001 | 35.875000 | 35.602501 | 35.682499 | 33.872143 | 96515200.0 | 35.984999 | 36.495238 | 0.346702 | 0.677629 | 37.850495 | 35.139980 | 35.784949 | 3.546235 | 38.027974 | 0.051918 | 30.209839 | 0.095602 | 43.557403 | -0.053820 | 0 | 0 |
| 2017-07-07 | 35.724998 | 36.187500 | 35.724998 | 36.044998 | 34.216255 | 76806800.0 | 36.001071 | 36.362023 | 0.387422 | 0.387634 | 37.137291 | 35.586756 | 35.958315 | 3.556633 | 37.818962 | 0.054401 | 30.470232 | 0.091907 | 43.662260 | -0.053608 | 0 | 0 |
| 2017-07-10 | 36.027500 | 36.487499 | 35.842499 | 36.264999 | 34.425095 | 84362400.0 | 35.973571 | 36.243809 | 0.388315 | 0.308042 | 36.859893 | 35.627725 | 36.162771 | 3.562891 | 37.613953 | 0.056893 | 30.735430 | 0.088177 | 43.752965 | -0.053302 | 0 | 0 |
# Get features and target (column 3 is 'Close')
X_value = pd.DataFrame(dataset_final.iloc[:, :])
y_value = pd.DataFrame(dataset_final.iloc[:, 3])
y_value.head(5)
| Close | |
|---|---|
| 2017-07-03 | 35.875000 |
| 2017-07-05 | 36.022499 |
| 2017-07-06 | 35.682499 |
| 2017-07-07 | 36.044998 |
| 2017-07-10 | 36.264999 |
# Normalize the data to the range [-1, 1]
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
X_scale_dataset.shape, y_scale_dataset.shape,
((1080, 22), (1080, 1))
X_value.shape[1]
22
n_steps_in = 3
n_features = X_value.shape[1]  # 22 features
n_steps_out = 1
# Reshape the data
'''Set the data input steps and output steps:
we use 3 days of data to predict 1 day's price here,
reshaping to (None, n_steps_in, n_features) for the LSTM input'''
# Get the X/y dataset: sliding windows of n_steps_in days of features (X),
# the next n_steps_out closing prices (y), and the window's own closing prices (yc)
def get_X_y(X_data, y_data):
    X = list()
    y = list()
    yc = list()
    length = len(X_data)
    for i in range(0, length, 1):
        X_value = X_data[i: i + n_steps_in][:, :]
        y_value = y_data[i + n_steps_in: i + (n_steps_in + n_steps_out)][:, 0]
        yc_value = y_data[i: i + n_steps_in][:, :]
        # Keep only complete windows
        if len(X_value) == n_steps_in and len(y_value) == n_steps_out:
            X.append(X_value)
            y.append(y_value)
            yc.append(yc_value)
    return np.array(X), np.array(y), np.array(yc)
# Get the train/test prediction index
def predict_index(dataset, X_train, n_steps_in, n_steps_out):
    # Get the prediction dates (drop the first n_steps_in days)
    train_predict_index = dataset.iloc[n_steps_in : X_train.shape[0] + n_steps_in + n_steps_out - 1, :].index
    test_predict_index = dataset.iloc[X_train.shape[0] + n_steps_in:, :].index
    return train_predict_index, test_predict_index
# Note: this definition shadows sklearn's mean_absolute_percentage_error imported
# above, and returns a percentage (0-100) rather than a fraction
def mean_absolute_percentage_error(actual, prediction):
    actual = pd.Series(actual)
    prediction = pd.Series(prediction)
    return 100 * np.mean(np.abs(actual - prediction) / actual)
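As a quick sanity check on the percentage scaling, here is a self-contained toy example (the input values are made up for illustration):

```python
import numpy as np
import pandas as pd

def mean_absolute_percentage_error(actual, prediction):
    # Same definition as above: returns a percentage (0-100)
    actual = pd.Series(actual)
    prediction = pd.Series(prediction)
    return 100 * np.mean(np.abs(actual - prediction) / actual)

# Errors of 10/100 = 10% and 10/200 = 5% average to 7.5%
print(round(mean_absolute_percentage_error([100, 200], [110, 190]), 6))  # → 7.5
```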
# Split train/test dataset (chronological 75/25 split, no shuffling)
def split_train_test(data):
    train_size = round(len(data) * 0.75)  # split on the argument's own length
    data_train = data[0:train_size]
    data_test = data[train_size:]
    return data_train, data_test
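The windowing logic can be checked on toy data before running it on the real dataset. A minimal self-contained sketch (the toy arrays are illustrative; `n_steps_in`/`n_steps_out` use the notebook's values of 3 and 1):

```python
import numpy as np

n_steps_in, n_steps_out = 3, 1

def get_X_y(X_data, y_data):
    # Same sliding-window logic as the notebook's get_X_y
    X, y, yc = [], [], []
    for i in range(len(X_data)):
        X_w = X_data[i: i + n_steps_in]
        y_w = y_data[i + n_steps_in: i + n_steps_in + n_steps_out, 0]
        yc_w = y_data[i: i + n_steps_in]
        if len(X_w) == n_steps_in and len(y_w) == n_steps_out:
            X.append(X_w)
            y.append(y_w)
            yc.append(yc_w)
    return np.array(X), np.array(y), np.array(yc)

X_toy = np.arange(40.0).reshape(10, 4)   # 10 days, 4 features
y_toy = np.arange(10.0).reshape(10, 1)   # 10 closing prices
X, y, yc = get_X_y(X_toy, y_toy)
# 10 days yield 7 complete (3-in, 1-out) windows
print(X.shape, y.shape, yc.shape)        # → (7, 3, 4) (7, 1) (7, 3, 1)
```

The same arithmetic explains the shapes below: 1080 rows produce 1077 windows.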
# Get data and check shapes
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X has shape (1077, 3, 22): each 3 x 22 slice is 3 days of features; yc holds the corresponding closing prices
X_train, X_test = split_train_test(X)
y_train, y_test = split_train_test(y)
yc_train, yc_test = split_train_test(yc)
index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
# %% --------------------------------------- Save dataset -----------------------------------------------------------------
print('X shape: ', X.shape)
print('y shape: ', y.shape)
print('X_train shape: ', X_train.shape)
print('y_train shape: ', y_train.shape)
print('y_c_train shape: ', yc_train.shape)
print('X_test shape: ', X_test.shape)
print('y_test shape: ', y_test.shape)
print('y_c_test shape: ', yc_test.shape)
print('index_train shape:', index_train.shape)
print('index_test shape:', index_test.shape)
X shape:  (1077, 3, 22)
y shape:  (1077, 1)
X_train shape:  (808, 3, 22)
y_train shape:  (808, 1)
y_c_train shape:  (808, 3, 1)
X_test shape:  (269, 3, 22)
y_test shape:  (269, 1)
y_c_test shape:  (269, 3, 1)
index_train shape: (808,)
index_test shape: (269,)
output_dim = y_train.shape[1]
output_dim
1
df = dataset_final.copy()
df.rename(columns={'Date':'date','Open':'open','Low':'low','Close':'close','Volume':'volume','High':'high'}, inplace = True)
df.reset_index(drop=True,inplace=True)
df.head(1)
| open | high | low | close | Adj Close | volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | COVID positiveIncrease | COVID deathIncrease | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 36.220001 | 36.325001 | 35.775002 | 35.875 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.96052 | 38.672945 | 34.830864 | 35.924548 | 3.55177 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 0 | 0 |
# df.drop(['volume', 'MACD','20SD','logmomentum','absolute of 3 comp','angle of 3 comp','absolute of 6 comp','angle of 6 comp','absolute of 9 comp','angle of 9 comp'], axis='columns', inplace=True) # only keep columns that can help as residuals in Arima Hybrid
df.head(1)
| open | high | low | close | Adj Close | volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | COVID positiveIncrease | COVID deathIncrease | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 36.220001 | 36.325001 | 35.775002 | 35.875 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.96052 | 38.672945 | 34.830864 | 35.924548 | 3.55177 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 0 | 0 |
test_len = len(X_test)
train_len = len(X_train )
test_len, train_len
(269, 808)
# Initialize moving averages from TA-Lib, store the functions in a dictionary
# talib_moving_averages = ['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'MIDPRICE', 'T3', 'TEMA', 'TRIMA']  # MIDPRICE removed because the output must be univariate
talib_moving_averages = ['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'T3', 'TEMA', 'TRIMA']
functions = {}
for ma in talib_moving_averages:
    functions[ma] = abstract.Function(ma)
# Determine kurtosis "K" values for MA periods 4-99
kurtosis_results = {'period': []}
for i in range(4, 100):
    kurtosis_results['period'].append(i)
    for ma in talib_moving_averages:
        # Run the moving average on the training portion (the last test_len days are
        # held back for the test set), then trim the MA result to its last 14 days
        ma_output = functions[ma](df[:-test_len], i).tail(14)
        # Determine the Pearson kurtosis "K" value (fisher=False, so mesokurtic K = 3)
        k = kurtosis(ma_output, fisher=False)
        # Add to the dictionary
        if ma not in kurtosis_results.keys():
            kurtosis_results[ma] = []
        kurtosis_results[ma].append(k)
kurtosis_results = pd.DataFrame(kurtosis_results)
kurtosis_results.to_csv('kurtosis_results.csv')
kurtosis_results.head(5)
| period | SMA | EMA | WMA | DEMA | KAMA | MIDPOINT | T3 | TEMA | TRIMA | |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 4 | 2.272452 | 2.652772 | 2.896972 | 3.800351 | 2.299585 | 2.171369 | 1.978458 | 4.609342 | 2.411225 |
| 1 | 5 | 1.839451 | 2.355815 | 2.481058 | 3.327525 | 1.841282 | 1.826597 | 1.640277 | 4.262302 | 1.994382 |
| 2 | 6 | 1.583886 | 2.159532 | 2.194320 | 2.945924 | 1.536136 | 1.605787 | 1.510972 | 3.878845 | 1.679710 |
| 3 | 7 | 1.461290 | 2.026758 | 1.990629 | 2.651927 | 1.506197 | 1.558096 | 1.514015 | 3.510432 | 1.486348 |
| 4 | 8 | 1.447516 | 1.935302 | 1.853935 | 2.429648 | 1.509566 | 1.621595 | 1.601580 | 3.184123 | 1.373337 |
# Determine the period with K closest to 3 (+/-5%)
optimized_period = {}
# https://pypi.org/project/TA-Lib/ determines the type of moving average to use
# https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.at.html#pandas.DataFrame.at
for ma in talib_moving_averages:
    difference = np.abs(kurtosis_results[ma] - 3)
    df_arimahyb = pd.DataFrame({'difference': difference, 'period': kurtosis_results['period']})
    df_arimahyb = df_arimahyb.sort_values(by=['difference'], ascending=True).reset_index(drop=True)
    if df_arimahyb.at[0, 'difference'] < 3 * 0.05:
        optimized_period[ma] = df_arimahyb.at[0, 'period']
    else:
        print(ma + ' is not viable, best K greater or less than 3 +/-5%')
print('\nOptimized periods:', optimized_period)
TRIMA is not viable, best K greater or less than 3 +/-5%
Optimized periods: {'SMA': 17, 'EMA': 51, 'WMA': 49, 'DEMA': 89, 'KAMA': 18, 'MIDPOINT': 14, 'T3': 19, 'TEMA': 9}
optimized_period
{'DEMA': 89,
'EMA': 51,
'KAMA': 18,
'MIDPOINT': 14,
'SMA': 17,
'T3': 19,
'TEMA': 9,
'WMA': 49}
simulation = {}
for ma in optimized_period:
    print(ma)
    print(functions[ma])
    print(int(optimized_period[ma]))
    # Apply the optimized-period MA column-wise to obtain the low-volatility
    # component; the high-volatility component is the residual
    low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
    low_vol = low_vol.fillna(0)
    high_vol = pd.DataFrame()
    df2 = df.copy()
    for i in df2.columns:
        if i in low_vol.columns:
            high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
# Note: low_vol and high_vol are overwritten on each pass, so after the loop
# they hold the decomposition for the last moving average (TEMA)
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
(TA-Lib prints analogous Function summaries for EMA, WMA, DEMA, KAMA, MIDPOINT, T3, and TEMA, with optimized periods 51, 49, 89, 18, 14, 19, and 9 respectively.)
low_vol.tail(20)
| open | high | low | close | Adj Close | volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | COVID positiveIncrease | COVID deathIncrease | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1060 | 140.200839 | 141.942909 | 138.524500 | 140.171495 | 139.966842 | 8.852448e+07 | 142.165478 | 146.699207 | 1.815578 | 4.572948 | 155.845103 | 137.553312 | 140.365562 | 4.935800 | 105.739092 | -0.047411 | 125.318767 | -0.018291 | 140.471430 | -0.008749 | 109077.158389 | 1731.777447 |
| 1061 | 139.425914 | 141.705469 | 138.035200 | 140.698014 | 140.492650 | 8.620711e+07 | 141.528981 | 145.978836 | 2.115887 | 4.189393 | 154.357621 | 137.600050 | 140.587196 | 4.939545 | 105.263514 | -0.048037 | 124.464999 | -0.019222 | 139.335869 | -0.009472 | 102841.290470 | 1754.266015 |
| 1062 | 140.773058 | 142.636405 | 139.932338 | 141.733666 | 141.526843 | 7.421445e+07 | 141.294887 | 145.298477 | 2.211018 | 3.647690 | 152.593858 | 138.003097 | 141.351509 | 4.946870 | 104.786174 | -0.048658 | 123.598217 | -0.020150 | 138.164839 | -0.010188 | 105557.460315 | 2130.717759 |
| 1063 | 142.179695 | 143.266994 | 141.127848 | 142.249061 | 142.041527 | 6.519616e+07 | 141.224295 | 144.665584 | 2.093072 | 3.241276 | 151.148137 | 138.183031 | 141.949877 | 4.950518 | 104.307114 | -0.049275 | 122.718682 | -0.021074 | 136.959041 | -0.010898 | 102639.580150 | 2301.041924 |
| 1064 | 142.253947 | 144.008334 | 141.546689 | 142.555532 | 142.347589 | 6.254214e+07 | 141.336839 | 144.184381 | 1.988881 | 2.884864 | 149.954110 | 138.414652 | 142.353647 | 4.952685 | 103.826381 | -0.049886 | 121.826667 | -0.021994 | 135.719217 | -0.011600 | 69581.281276 | 1416.274721 |
| 1065 | 142.782738 | 143.732491 | 141.438660 | 142.125353 | 141.918068 | 6.542511e+07 | 141.385297 | 143.758659 | 1.774804 | 2.626682 | 149.012024 | 138.505294 | 142.201451 | 4.949632 | 103.344020 | -0.050491 | 120.922446 | -0.022909 | 134.446150 | -0.012293 | 82284.124854 | 1213.050329 |
| 1066 | 142.153085 | 142.656915 | 140.466684 | 141.564232 | 141.357788 | 7.040262e+07 | 141.585336 | 143.387397 | 1.634667 | 2.376817 | 148.141030 | 138.633764 | 141.776638 | 4.945637 | 102.860075 | -0.051092 | 120.006305 | -0.023818 | 133.140665 | -0.012977 | 91985.728638 | 1671.446790 |
| 1067 | 142.177201 | 143.194327 | 140.977156 | 142.610382 | 142.402435 | 6.948112e+07 | 141.933749 | 143.094536 | 1.573317 | 2.074153 | 147.242842 | 138.946230 | 142.332468 | 4.953023 | 102.374593 | -0.051687 | 119.078535 | -0.024722 | 131.803627 | -0.013650 | 105946.749025 | 2401.564322 |
| 1068 | 143.009006 | 144.052615 | 142.286776 | 143.812497 | 143.602819 | 6.805244e+07 | 142.378675 | 142.879716 | 1.473333 | 1.874158 | 146.628032 | 139.131400 | 143.319154 | 4.961467 | 101.887619 | -0.052275 | 118.139433 | -0.025618 | 130.435938 | -0.014311 | 97749.622599 | 2337.714304 |
| 1069 | 143.380322 | 145.547752 | 142.940349 | 145.397429 | 145.185452 | 7.592729e+07 | 142.902069 | 142.813890 | 1.447641 | 1.844159 | 146.502207 | 139.125573 | 144.704671 | 4.972505 | 101.399198 | -0.052858 | 117.189304 | -0.026508 | 129.038540 | -0.014959 | 65512.021173 | 1450.883588 |
| 1070 | 145.337970 | 147.615882 | 144.980528 | 147.444584 | 147.229635 | 7.653090e+07 | 143.644287 | 142.961273 | 1.284466 | 2.010227 | 146.981728 | 138.940819 | 146.531280 | 4.986604 | 100.909377 | -0.053435 | 116.228458 | -0.027389 | 127.612408 | -0.015592 | 81208.790926 | 1562.998080 |
| 1071 | 147.375283 | 149.163050 | 146.995423 | 148.921380 | 148.704294 | 6.811986e+07 | 144.553694 | 143.236380 | 0.961952 | 2.270386 | 147.777152 | 138.695607 | 148.124680 | 4.996737 | 100.418203 | -0.054006 | 115.257214 | -0.028261 | 126.158555 | -0.016211 | 80097.040341 | 1799.104627 |
| 1072 | 148.656821 | 150.010875 | 148.071943 | 149.870634 | 149.652170 | 6.425222e+07 | 145.660163 | 143.530869 | 0.589081 | 2.556352 | 148.643574 | 138.418164 | 149.288649 | 5.003230 | 99.925720 | -0.054570 | 114.275894 | -0.029124 | 124.678027 | -0.016812 | 84773.597081 | 2474.891188 |
| 1073 | 149.806550 | 150.715254 | 149.026204 | 149.977942 | 149.759331 | 6.069918e+07 | 146.862121 | 143.785380 | 0.135134 | 2.805932 | 149.397244 | 138.173516 | 149.748178 | 5.003989 | 99.431976 | -0.055128 | 113.284828 | -0.029977 | 123.171903 | -0.017396 | 81822.656493 | 2309.512985 |
| 1074 | 149.937482 | 150.666013 | 149.022091 | 149.911667 | 149.693162 | 5.465321e+07 | 147.905162 | 144.001463 | -0.245163 | 3.045742 | 150.092948 | 137.909978 | 149.857170 | 5.003545 | 98.937018 | -0.055679 | 112.284350 | -0.030820 | 121.641290 | -0.017961 | 50071.065843 | 1328.997652 |
| 1075 | 150.228161 | 151.254072 | 149.586503 | 150.104281 | 149.885502 | 5.602702e+07 | 148.803988 | 144.237215 | -0.571069 | 3.270011 | 150.777237 | 137.697192 | 150.021910 | 5.004835 | 98.440892 | -0.056223 | 111.274800 | -0.031650 | 120.087330 | -0.018506 | 78497.995262 | 1318.889721 |
| 1076 | 150.328251 | 150.997797 | 149.591175 | 149.912656 | 149.694163 | 5.484778e+07 | 149.449021 | 144.548659 | -0.850904 | 3.458615 | 151.465890 | 137.631428 | 149.949074 | 5.003520 | 97.943645 | -0.056759 | 110.256524 | -0.032469 | 118.511190 | -0.019029 | 73847.766335 | 1393.610487 |
| 1077 | 150.525566 | 152.430694 | 150.099878 | 151.531571 | 151.310718 | 7.580033e+07 | 150.032876 | 144.967153 | -0.975625 | 3.719924 | 152.407001 | 137.527305 | 151.004072 | 5.014296 | 97.445324 | -0.057289 | 109.229873 | -0.033274 | 116.914063 | -0.019528 | 86701.762012 | 1676.051703 |
| 1078 | 149.301052 | 151.688142 | 148.723104 | 151.137179 | 150.916905 | 1.012990e+08 | 150.349418 | 145.413317 | -0.891585 | 3.905336 | 153.223988 | 137.602646 | 151.092810 | 5.011652 | 96.945977 | -0.057811 | 108.195203 | -0.034066 | 115.297171 | -0.020004 | 83539.187040 | 1748.759489 |
| 1079 | 149.321425 | 151.018197 | 148.455004 | 150.396057 | 150.176865 | 9.262134e+07 | 150.424479 | 145.823313 | -0.852689 | 3.878291 | 153.579894 | 138.066731 | 150.628308 | 5.006660 | 96.445650 | -0.058325 | 107.152874 | -0.034844 | 113.661756 | -0.020453 | 58706.570197 | 1004.292073 |
high_vol.head(10)
| open | high | low | close | Adj Close | volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | COVID positiveIncrease | COVID deathIncrease | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 36.220001 | 36.325001 | 35.775002 | 35.875000 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.960520 | 38.672945 | 34.830864 | 35.924548 | 3.551770 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 0.0 | 0.0 |
| 1 | 35.922501 | 36.197498 | 35.680000 | 36.022499 | 34.194897 | 86278400.0 | 36.095357 | 36.634762 | 0.328795 | 0.852735 | 38.340231 | 34.929292 | 35.989849 | 3.555991 | 38.240991 | 0.049445 | 29.954520 | 0.099254 | 43.438321 | -0.053936 | 0.0 | 0.0 |
| 2 | 35.755001 | 35.875000 | 35.602501 | 35.682499 | 33.872143 | 96515200.0 | 35.984999 | 36.495238 | 0.346702 | 0.677629 | 37.850495 | 35.139980 | 35.784949 | 3.546235 | 38.027974 | 0.051918 | 30.209839 | 0.095602 | 43.557403 | -0.053820 | 0.0 | 0.0 |
| 3 | 35.724998 | 36.187500 | 35.724998 | 36.044998 | 34.216255 | 76806800.0 | 36.001071 | 36.362023 | 0.387422 | 0.387634 | 37.137291 | 35.586756 | 35.958315 | 3.556633 | 37.818962 | 0.054401 | 30.470232 | 0.091907 | 43.662260 | -0.053608 | 0.0 | 0.0 |
| 4 | 36.027500 | 36.487499 | 35.842499 | 36.264999 | 34.425095 | 84362400.0 | 35.973571 | 36.243809 | 0.388315 | 0.308042 | 36.859893 | 35.627725 | 36.162771 | 3.562891 | 37.613953 | 0.056893 | 30.735430 | 0.088177 | 43.752965 | -0.053302 | 0.0 | 0.0 |
| 5 | 36.182499 | 36.462502 | 36.095001 | 36.382500 | 34.536625 | 79127200.0 | 36.039642 | 36.202738 | 0.372153 | 0.308860 | 36.820458 | 35.585018 | 36.309257 | 3.566217 | 37.412947 | 0.059392 | 31.005161 | 0.084416 | 43.829622 | -0.052901 | 0.0 | 0.0 |
| 6 | 36.467499 | 36.544998 | 36.205002 | 36.435001 | 34.586472 | 99538000.0 | 36.101071 | 36.206547 | 0.317572 | 0.295861 | 36.798268 | 35.614826 | 36.393086 | 3.567700 | 37.215939 | 0.061899 | 31.279154 | 0.080632 | 43.892360 | -0.052406 | 0.0 | 0.0 |
| 7 | 36.375000 | 37.122501 | 36.360001 | 36.942501 | 35.068211 | 100797600.0 | 36.253571 | 36.220595 | 0.322643 | 0.340687 | 36.901969 | 35.539221 | 36.759363 | 3.581920 | 37.022928 | 0.064410 | 31.557136 | 0.076830 | 43.941338 | -0.051818 | 0.0 | 0.0 |
| 8 | 36.992500 | 37.332500 | 36.832500 | 37.259998 | 35.369610 | 80528400.0 | 36.430357 | 36.266785 | 0.257925 | 0.410484 | 37.087753 | 35.445818 | 37.093120 | 3.590715 | 36.833908 | 0.066926 | 31.838833 | 0.073014 | 43.976744 | -0.051137 | 0.0 | 0.0 |
| 9 | 37.205002 | 37.724998 | 37.142502 | 37.389999 | 35.493000 | 95174000.0 | 36.674285 | 36.329523 | 0.184267 | 0.445597 | 37.220717 | 35.438330 | 37.291039 | 3.594294 | 36.648875 | 0.069445 | 32.123972 | 0.069192 | 43.998789 | -0.050365 | 0.0 | 0.0 |
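The MA7, MA21, MACD, 20SD and Bollinger-band columns in the tables above can all be derived from the `close` series. A minimal sketch of that feature engineering (the EMA spans of 12/26 for MACD and the 2-standard-deviation band width are conventional choices assumed here, not taken from the notebook):

```python
import pandas as pd

def add_technical_features(df: pd.DataFrame) -> pd.DataFrame:
    """Derive the moving-average features shown above from the 'close' column."""
    out = df.copy()
    out['MA7'] = out['close'].rolling(7).mean()
    out['MA21'] = out['close'].rolling(21).mean()
    # MACD: difference between a fast and a slow exponential moving average
    ema12 = out['close'].ewm(span=12, adjust=False).mean()
    ema26 = out['close'].ewm(span=26, adjust=False).mean()
    out['MACD'] = ema12 - ema26
    # Bollinger bands: 21-day mean +/- 2 rolling standard deviations
    out['20SD'] = out['close'].rolling(21).std()
    out['upper_band'] = out['MA21'] + 2 * out['20SD']
    out['lower_band'] = out['MA21'] - 2 * out['20SD']
    return out
```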
def get_arima(dataframe, original_data, train_len, test_len):
    # Prepare train and test data
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_train, X_test = split_train_test(X_value)
    y_train, y_test = split_train_test(y_value)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train['close'].values.tolist()
    y_test_list = y_test['close'].values.tolist()
    # Initialize model: stepwise search over (p, q) up to 3, with d fixed at 3
    model = auto_arima(y_train_list, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    # Determine model parameters (auto_arima returns an already-fitted model)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')
    # Generate one-step-ahead predictions with an expanding training window:
    # refit, forecast one step, then append the true value to the history
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict(n_periods=1)[0])
        y_train_list.append(y_test_list[i])
    # Generate error metrics
    mse = mean_squared_error(yc_test, prediction)
    rmse = mse ** 0.5
    mae = mean_absolute_error(pd.Series(yc_test).values.tolist(), pd.Series(prediction).values.tolist())
    return yc, prediction, mse, rmse, mae
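The loop inside `get_arima` is a standard walk-forward (expanding-window) evaluation: forecast one step, then reveal the true value before forecasting the next. The control flow, isolated from pmdarima, looks like this (the naive last-value forecaster is only a stand-in for the refitted ARIMA):

```python
def walk_forward(train, test, forecast_one=lambda history: history[-1]):
    """One-step-ahead walk-forward evaluation with an expanding window.

    `forecast_one` maps the history so far to the next-step forecast;
    here it defaults to a naive last-value model as a placeholder.
    """
    history = list(train)
    predictions = []
    for actual in test:
        predictions.append(forecast_one(history))  # forecast before seeing the value
        history.append(actual)                     # then add the true value to history
    return predictions
```

This is why the loop is slow for real ARIMA: the model is refit from scratch at every test step rather than updated incrementally.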
def plot_train(simulation, SIM):
    train_predict_index = np.load("index_train_appl.npy", allow_pickle=True)  # Dates for the training data
    # Each column holds one window of predicted daily closing prices
    predict_result = pd.DataFrame()
    for i in range(len(simulation[SIM]['final_tr']['prediction'])):
        y_predict = pd.DataFrame(simulation[SIM]['final_tr']['prediction'][i], columns=["predicted_price"],
                                 index=train_predict_index[i:i + output_dim])
        predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
    # Each column holds one window of real daily closing prices
    real_price = pd.DataFrame()
    for i in range(len(simulation[SIM]['final_tr']['original'])):
        y_train = pd.DataFrame(simulation[SIM]['final_tr']['original'][i], columns=["real_price"],
                               index=train_predict_index[i:i + output_dim])
        real_price = pd.concat([real_price, y_train], axis=1, sort=False)
    # Average the overlapping windows into a single daily series
    predict_result['predicted_mean'] = predict_result.mean(axis=1)
    real_price['real_mean'] = real_price.mean(axis=1)
    # Plot the predicted result
    plt.figure(figsize=(16, 8))
    plt.plot(real_price["real_mean"])
    plt.plot(predict_result["predicted_mean"], color='r')
    plt.xlabel("Date")
    plt.ylabel("Stock price")
    plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
    plt.title(f"Training result for hybrid ARIMA-LSTM with MA - {SIM} : {fileimg}", fontsize=20)
    sf = fileimg + '_' + SIM + ' Train Hybrid ARIMA-LSTM Pred Out.png'
    plt.savefig(sf, dpi='figure')
    plt.show()
    # Calculate error metrics
    predicted = predict_result["predicted_mean"]
    real = real_price["real_mean"]
    RMSE = np.sqrt(mean_squared_error(predicted, real))
    MSE = mean_squared_error(predicted, real)
    MAE = mean_absolute_error(predicted, real)
    print(f"----- Train RMSE for {SIM} -----", RMSE)
    print(f"----- Train MSE for {SIM} -----", MSE)
    print(f"----- Train MAE for {SIM} -----", MAE)
def plot_test(simulation, SIM):
    test_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for the test data
    # Each column holds one window of predicted daily closing prices
    predict_result = pd.DataFrame()
    for i in range(len(simulation[SIM]['final']['prediction'])):
        y_predict = pd.DataFrame(simulation[SIM]['final']['prediction'][i], columns=["predicted_price"],
                                 index=test_predict_index[i:i + output_dim])
        predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
    # Each column holds one window of real daily closing prices
    real_price = pd.DataFrame()
    for i in range(len(simulation[SIM]['final']['original'])):
        y_train = pd.DataFrame(simulation[SIM]['final']['original'][i], columns=["real_price"],
                               index=test_predict_index[i:i + output_dim])
        real_price = pd.concat([real_price, y_train], axis=1, sort=False)
    # Average the overlapping windows into a single daily series
    predict_result['predicted_mean'] = predict_result.mean(axis=1)
    real_price['real_mean'] = real_price.mean(axis=1)
    # Plot the predicted result
    plt.figure(figsize=(16, 8))
    plt.plot(real_price["real_mean"])
    plt.plot(predict_result["predicted_mean"], color='r')
    plt.xlabel("Date")
    plt.ylabel("Stock price")
    plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
    plt.title(f"Testing result for hybrid ARIMA-LSTM with MA - {SIM} : {fileimg}", fontsize=20)
    sf = fileimg + '_' + SIM + ' Test Hybrid ARIMA-LSTM Pred Out.png'
    plt.savefig(sf, dpi='figure')
    plt.show()
    # Calculate error metrics
    predicted = predict_result["predicted_mean"]
    real = real_price["real_mean"]
    RMSE = np.sqrt(mean_squared_error(predicted, real))
    MSE = mean_squared_error(predicted, real)
    MAE = mean_absolute_error(predicted, real)
    print(f"----- Test RMSE for {SIM} -----", RMSE)
    print(f"----- Test MSE for {SIM} -----", MSE)
    print(f"----- Test MAE for {SIM} -----", MAE)
def plot_train_high(simulation, SIM):
    train_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for the test data ('high_vol' stores test-set results)
    # Each column holds one window of predicted daily closing prices
    predict_result = pd.DataFrame()
    for i in range(len(simulation[SIM]['high_vol']['prediction'])):
        y_predict = pd.DataFrame(simulation[SIM]['high_vol']['prediction'][i], columns=["predicted_price"],
                                 index=train_predict_index[i:i + output_dim])
        predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
    # Each column holds one window of real daily closing prices
    real_price = pd.DataFrame()
    for i in range(len(simulation[SIM]['high_vol']['original'])):
        y_train = pd.DataFrame(simulation[SIM]['high_vol']['original'][i], columns=["real_price"],
                               index=train_predict_index[i:i + output_dim])
        real_price = pd.concat([real_price, y_train], axis=1, sort=False)
    # Average the overlapping windows into a single daily series
    predict_result['predicted_mean'] = predict_result.mean(axis=1)
    real_price['real_mean'] = real_price.mean(axis=1)
    # Plot the predicted result
    plt.figure(figsize=(16, 8))
    plt.plot(real_price["real_mean"])
    plt.plot(predict_result["predicted_mean"], color='r')
    plt.xlabel("Date")
    plt.ylabel("Stock price")
    plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
    plt.title(f"Individual LSTM (high-volatility) result for {SIM}", fontsize=20)
    plt.show()
    # Calculate error metrics
    predicted = predict_result["predicted_mean"]
    real = real_price["real_mean"]
    RMSE = np.sqrt(mean_squared_error(predicted, real))
    MSE = mean_squared_error(predicted, real)
    MAE = mean_absolute_error(predicted, real)
    print(f"----- Individual LSTM RMSE for {SIM} -----", RMSE)
    print(f"----- Individual LSTM MSE for {SIM} -----", MSE)
    print(f"----- Individual LSTM MAE for {SIM} -----", MAE)
def plot_train_low(simulation, SIM):
    train_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for the test data ('low_vol' stores test-set results)
    # Each column holds one window of predicted daily closing prices
    predict_result = pd.DataFrame()
    for i in range(len(simulation[SIM]['low_vol']['prediction'])):
        y_predict = pd.DataFrame(simulation[SIM]['low_vol']['prediction'][i], columns=["predicted_price"],
                                 index=train_predict_index[i:i + output_dim])
        predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
    # Each column holds one window of real daily closing prices
    real_price = pd.DataFrame()
    for i in range(len(simulation[SIM]['low_vol']['original'])):
        y_train = pd.DataFrame(simulation[SIM]['low_vol']['original'][i], columns=["real_price"],
                               index=train_predict_index[i:i + output_dim])
        real_price = pd.concat([real_price, y_train], axis=1, sort=False)
    # Average the overlapping windows into a single daily series
    predict_result['predicted_mean'] = predict_result.mean(axis=1)
    real_price['real_mean'] = real_price.mean(axis=1)
    # Plot the predicted result
    plt.figure(figsize=(16, 8))
    plt.plot(real_price["real_mean"])
    plt.plot(predict_result["predicted_mean"], color='r')
    plt.xlabel("Date")
    plt.ylabel("Stock price")
    plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
    plt.title(f"ARIMA (low-volatility) result for {SIM}", fontsize=20)
    plt.show()
    # Calculate error metrics
    predicted = predict_result["predicted_mean"]
    real = real_price["real_mean"]
    RMSE = np.sqrt(mean_squared_error(predicted, real))
    MSE = mean_squared_error(predicted, real)
    MAE = mean_absolute_error(predicted, real)
    print(f"----- ARIMA RMSE for {SIM} -----", RMSE)
    print(f"----- ARIMA MSE for {SIM} -----", MSE)
    print(f"----- ARIMA MAE for {SIM} -----", MAE)
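All four plot functions above use the same trick: each multi-step forecast window is placed at its own date offset, and overlapping windows are averaged column-wise into one daily series. A standalone sketch of that averaging (the name `average_overlapping_windows` is hypothetical; `output_dim` is the window length, a parameter here rather than a global):

```python
import pandas as pd

def average_overlapping_windows(windows, index, output_dim):
    """Place each length-`output_dim` window at its date offset and average column-wise."""
    result = pd.DataFrame()
    for i, w in enumerate(windows):
        col = pd.DataFrame(w, columns=[f"w{i}"], index=index[i:i + output_dim])
        result = pd.concat([result, col], axis=1, sort=False)
    # mean(axis=1) skips the NaNs where a window does not cover a date
    return result.mean(axis=1)
```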
import os
os.getcwd()
'/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs/Covid'
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    # Scale features and target to [-1, 1]
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Window the data: X has shape (samples, 3, features), i.e. 3 days of
    # features per sample; y holds the corresponding closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # manual offset later subtracted from the test forecasts
    input_dim = X_train.shape[1]     # 3
    feature_size = X_train.shape[2]  # 24
    output_dim = y_train.shape[1]    # 1
    # Option 1: set up & fit the LSTM RNN
    model = Sequential()
    model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    model.add(Dense(units=64, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
    # Common code
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file + '.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False)
    history = model.fit(X_train, y_train, epochs=500, batch_size=int(optimized_period[ma]),
                        verbose=2, callbacks=callbacks, validation_data=(X_test, y_test), shuffle=False)
    # Plot loss
    fname2 = img_file + '-' + ma
    plt.title(fname2 + ' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2 + '.png', dpi='figure')
    pyplot.show()
    # Alternative architectures tried; each reuses the "Common code" block above
    # (callbacks, plot_model, fit, loss plot) and is kept here for reference.
    # Option 2: bidirectional LSTM
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # Option 3: regularized LSTM with a custom double-tanh activation
    # see https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model = Sequential()
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2,
    #                kernel_regularizer=l1_l2(0.00, 0.00), bias_regularizer=l1_l2(0.00, 0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # Option 4: stacked LSTM sized by `lstm_len`
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(X_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # Generate predictions (training set), rescaled back to price units
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    # Generate error metrics on the rescaled (price-unit) values
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))
    # Generate predictions (test set); `det` is a manual offset subtracted
    # from the rescaled test forecasts
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))
    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
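`get_X_y` (defined earlier in the notebook) produces the `(samples, n_steps_in, features)` arrays that `get_lstm` feeds the LSTM. A hedged reconstruction of that windowing logic (the name `make_windows` is hypothetical; the actual helper may differ in edge handling):

```python
import numpy as np

def make_windows(features, target, n_steps_in=3, n_steps_out=1):
    """Slide a window over the rows: each X sample holds `n_steps_in` days of
    features, and y holds the `n_steps_out` target values that follow it."""
    X, y = [], []
    for i in range(len(features) - n_steps_in - n_steps_out + 1):
        X.append(features[i:i + n_steps_in])
        y.append(target[i + n_steps_in:i + n_steps_in + n_steps_out])
    return np.array(X), np.array(y)
```

With `n_steps_in=3` this matches the shape comments in `get_lstm`: each sample is a 3-day block of the feature matrix, and the label is the next day's (scaled) closing price.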
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation1 = {}
    imgfile = 'Experiment1'
    for ma in optimized_period:
        print(ma)
        print(functions[ma])
        print(int(optimized_period[ma]))
        # Low-volatility component: the moving average of each column
        low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
        low_vol = low_vol.fillna(0)
        low_vol_data = df['close']
        # High-volatility component: the residual after subtracting the moving average
        high_vol = pd.DataFrame()
        df2 = df.copy()
        for i in df2.columns:
            if i in low_vol.columns:
                high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
        high_vol_data = df['close']
        # Generate ARIMA and LSTM predictions
        print('\nWorking on ' + ma + ' predictions')
        try:
            print('parameters used : ', train_len, test_len)
            low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = get_arima(low_vol, low_vol_data, train_len, test_len)
        except Exception:
            print('ARIMA error, skipping to next MA type')
            continue
        Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
        final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)  # ignoring first 3 steps
        mse_ftr = mean_squared_error(df['close'].head(train_len).values, final_prediction_tr.values)
        rmse_ftr = mse_ftr ** 0.5
        mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        # Recombine the components: final forecast = ARIMA (low vol) + LSTM (high vol)
        final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
        mse = mean_squared_error(df['close'].tail(test_len).values, final_prediction.values)
        rmse = mse ** 0.5
        mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        # Generate directional prediction accuracy
        actual = df['close'].tail(test_len).values
        result_1 = []
        result_2 = []
        for i in range(1, len(final_prediction)):
            # Compare prediction to previous close price
            if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                result_1.append(1)
            elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                result_1.append(1)
            else:
                result_1.append(0)
            # Compare prediction to previous prediction
            if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                result_2.append(1)
            elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                result_2.append(1)
            else:
                result_2.append(0)
        accuracy_1 = np.mean(result_1)
        accuracy_2 = np.mean(result_2)
        simulation1[ma] = {'low_vol': {'original': list(low_vol_Original), 'prediction': list(low_vol_prediction),
                                       'mse': low_vol_mse, 'rmse': low_vol_rmse, 'mae': low_vol_mae},
                           'high_vol': {'original': list(high_vol_Original), 'prediction': list(high_vol_prediction),
                                        'mse': high_vol_mse, 'rmse': high_vol_rmse, 'mae': high_vol_mae},
                           'final_tr': {'original': df['close'].head(train_len).tolist(), 'prediction': final_prediction_tr.values.tolist(),
                                        'mse': mse_ftr, 'rmse': rmse_ftr, 'mae': mae_ftr},
                           'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(),
                                     'mse': mse, 'rmse': rmse, 'mae': mae},
                           'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
        # Save simulation data here as a checkpoint
        with open('simulation1_data.json', 'w') as fp:
            json.dump(simulation1, fp)
    for ma in simulation1.keys():
        print('\n' + ma)
        print('Prediction vs Close:\t\t' + str(round(100*simulation1[ma]['accuracy']['prediction vs close'], 2))
              + '% Accuracy')
        print('Prediction vs Prediction:\t' + str(round(100*simulation1[ma]['accuracy']['prediction vs prediction'], 2))
              + '% Accuracy')
        print('MSE:\t', simulation1[ma]['final']['mse'],
              '\nRMSE:\t', simulation1[ma]['final']['rmse'],
              '\nMAE:\t', simulation1[ma]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:', elapsed/60)
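The loop above splits each series into a low-volatility moving average (modeled by ARIMA) and a high-volatility residual (modeled by the LSTM). Because the residual is defined as the series minus its moving average, the two components sum back to the original series exactly, which is what justifies adding the two models' forecasts. A minimal sketch of the decomposition (the SMA period 17 matches the value printed for SMA in the output below; the helper name is illustrative):

```python
import numpy as np
import pandas as pd

def decompose(series: pd.Series, timeperiod: int = 17):
    """Split a price series into an SMA (low-volatility) component and
    the residual (high-volatility) component, as in the main loop."""
    low_vol = series.rolling(timeperiod).mean().fillna(0)  # NaN warm-up filled with 0
    high_vol = series - low_vol
    return low_vol, high_vol

close = pd.Series(np.linspace(100, 120, 40))
low, high = decompose(close)
recombined = low + high  # identical to `close` by construction
```

Note that filling the warm-up NaNs with 0 makes the first `timeperiod - 1` residuals equal to the raw prices, which is why the recombination step in the loop skips the earliest observations.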
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.48 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4157.020, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3687.148, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.15 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3458.651, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3322.133, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.53 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.58 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3324.133, Time=0.15 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.098 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1657.067
Date: Sun, 12 Dec 2021 AIC 3322.133
Time: 13:04:15 BIC 3340.897
Sample: 0 HQIC 3329.339
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1966 0.003 -387.226 0.000 -1.203 -1.191
ar.L2 -0.8952 0.006 -138.692 0.000 -0.908 -0.883
ar.L3 -0.3968 0.006 -68.284 0.000 -0.408 -0.385
sigma2 3.5858 0.017 214.535 0.000 3.553 3.619
===================================================================================
Ljung-Box (L1) (Q): 14.47 Jarque-Bera (JB): 2428881.42
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 271.99
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.00968, saving model to LSTM1.h5
48/48 - 4s - loss: 0.2459 - val_loss: 0.0097 - lr: 0.0010
Epoch 6/500: ReduceLROnPlateau reducing learning rate to 1.0000e-04
Epoch 11/500: ReduceLROnPlateau reducing learning rate to 1.0000e-05
Epoch 36/500: val_loss did not improve from 0.00968
48/48 - 1s - loss: 0.0340 - val_loss: 0.1503 - lr: 1.0000e-05
[per-epoch training log truncated]
534ms/epoch - 11ms/step Epoch 37/500 Epoch 00037: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0313 - val_loss: 0.1495 - lr: 1.0000e-05 - 531ms/epoch - 11ms/step Epoch 38/500 Epoch 00038: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0306 - val_loss: 0.1499 - lr: 1.0000e-05 - 513ms/epoch - 11ms/step Epoch 39/500 Epoch 00039: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0352 - val_loss: 0.1484 - lr: 1.0000e-05 - 516ms/epoch - 11ms/step Epoch 40/500 Epoch 00040: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0306 - val_loss: 0.1483 - lr: 1.0000e-05 - 509ms/epoch - 11ms/step Epoch 41/500 Epoch 00041: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0324 - val_loss: 0.1472 - lr: 1.0000e-05 - 506ms/epoch - 11ms/step Epoch 42/500 Epoch 00042: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0311 - val_loss: 0.1472 - lr: 1.0000e-05 - 526ms/epoch - 11ms/step Epoch 43/500 Epoch 00043: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0322 - val_loss: 0.1460 - lr: 1.0000e-05 - 553ms/epoch - 12ms/step Epoch 44/500 Epoch 00044: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0304 - val_loss: 0.1454 - lr: 1.0000e-05 - 509ms/epoch - 11ms/step Epoch 45/500 Epoch 00045: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0297 - val_loss: 0.1445 - lr: 1.0000e-05 - 541ms/epoch - 11ms/step Epoch 46/500 Epoch 00046: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0301 - val_loss: 0.1447 - lr: 1.0000e-05 - 520ms/epoch - 11ms/step Epoch 47/500 Epoch 00047: val_loss did not improve from 0.00968 48/48 - 0s - loss: 0.0348 - val_loss: 0.1438 - lr: 1.0000e-05 - 488ms/epoch - 10ms/step Epoch 48/500 Epoch 00048: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0347 - val_loss: 0.1434 - lr: 1.0000e-05 - 509ms/epoch - 11ms/step Epoch 49/500 Epoch 00049: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0301 - val_loss: 0.1431 - lr: 1.0000e-05 - 533ms/epoch - 11ms/step 
Epoch 50/500 Epoch 00050: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0306 - val_loss: 0.1438 - lr: 1.0000e-05 - 538ms/epoch - 11ms/step Epoch 51/500 Epoch 00051: val_loss did not improve from 0.00968 48/48 - 1s - loss: 0.0315 - val_loss: 0.1442 - lr: 1.0000e-05 - 509ms/epoch - 11ms/step Epoch 00051: early stopping
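The schedule visible in this log (learning rate cut by a factor of ten each time val_loss stalls, floored at 1e-5, followed by early stopping at epoch 51) can be mimicked with a small pure-Python tracker. This is a sketch of the mechanics only; the notebook's actual Keras callbacks and their patience/factor settings are not shown, so those values are assumptions.

```python
# Minimal sketch of the plateau schedule seen in the log: cut the learning
# rate by 10x when val_loss fails to improve for `patience` epochs, floor it
# at `min_lr`, and stop after `stop_patience` epochs without improvement.
# The patience values are assumptions, not the notebook's actual settings.
def run_schedule(val_losses, lr=1e-3, factor=0.1, patience=5,
                 min_lr=1e-5, stop_patience=50):
    best, wait, stop_wait = float('inf'), 0, 0
    lr_history = []
    for epoch, loss in enumerate(val_losses, start=1):
        lr_history.append(lr)          # rate in effect for this epoch
        if loss < best:
            best, wait, stop_wait = loss, 0, 0
        else:
            wait += 1
            stop_wait += 1
            if wait >= patience:       # plateau: reduce the rate
                lr = max(lr * factor, min_lr)
                wait = 0
            if stop_wait >= stop_patience:
                return best, lr_history, epoch  # early stopping
    return best, lr_history, len(val_losses)
```

Fed a val_loss series that improves only at epoch 1, as above, this sketch announces reductions after epochs 6, 11, and 16 (each taking effect the following epoch) and stops at epoch 51, matching the log.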
SMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 29.531169515594907
RMSE: 5.434258874547192
MAPE: 4.511922179897357
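The block above reports directional accuracy alongside MSE, RMSE, and MAPE. The notebook's evaluation helper is not shown, so the following is a hypothetical re-creation; in particular, the sign-based definition of "Prediction vs Close" accuracy is an assumption.

```python
import math

def eval_forecast(actual, pred):
    """Hypothetical re-creation of the reported metrics: MSE, RMSE,
    MAPE (in percent), and a directional accuracy that asks whether the
    prediction moved in the same direction as the actual close."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, pred)) / n
    rmse = math.sqrt(mse)
    mape = 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / n
    # Count days where the predicted move and the actual close-to-close
    # move share a sign (this definition is assumed, not taken from the text).
    hits = sum((pred[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0
               for i in range(1, n))
    direction_acc = 100 * hits / (n - 1)
    return mse, rmse, mape, direction_acc
```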
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
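TA-Lib's help text above documents the EMA call signature. As a rough plain-Python equivalent, an exponential moving average with the conventional smoothing factor 2/(timeperiod+1) looks like this (TA-Lib additionally seeds the recursion with an SMA over the first window, which this sketch skips):

```python
# Plain-Python sketch of an exponential moving average. Not TA-Lib's exact
# EMA: it seeds with the first price rather than an SMA over the first window,
# so values differ in the warm-up region.
def ema(prices, timeperiod=30):
    k = 2 / (timeperiod + 1)     # conventional smoothing factor
    out = [prices[0]]
    for p in prices[1:]:
        out.append(k * p + (1 - k) * out[-1])
    return out
```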
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4231.556, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3761.238, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.20 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3532.227, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3394.496, Time=0.07 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.92 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.51 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3396.496, Time=0.26 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.451 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1693.248
Date: Sun, 12 Dec 2021 AIC 3394.496
Time: 13:06:14 BIC 3413.260
Sample: 0 HQIC 3401.702
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.569 0.000 -1.204 -1.192
ar.L2 -0.8976 0.006 -139.811 0.000 -0.910 -0.885
ar.L3 -0.3984 0.006 -68.662 0.000 -0.410 -0.387
sigma2 3.9230 0.018 215.372 0.000 3.887 3.959
===================================================================================
Ljung-Box (L1) (Q): 14.54 Jarque-Bera (JB): 2462173.05
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 273.82
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
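The AIC that the stepwise search minimizes follows directly from the log likelihood in the summary above: with k = 4 estimated parameters (three AR coefficients plus sigma2), AIC = 2k - 2 ln L.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L."""
    return 2 * n_params - 2 * log_likelihood

# Reproduces the table above: three AR terms plus sigma2 gives k = 4.
round(aic(-1693.248, 4), 3)  # 3394.496
```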
WARNING:tensorflow:Layer lstm_1 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.14983, saving model to LSTM1.h5
16/16 - 2s - loss: 0.2317 - val_loss: 0.1498 - lr: 0.0010 - 2s/epoch - 128ms/step
...
Epoch 25/500
Epoch 00025: val_loss improved from 0.01677 to 0.01673, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0264 - val_loss: 0.0167 - lr: 0.0010 - 225ms/epoch - 14ms/step
...
Epoch 00029: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
...
Epoch 00034: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
...
Epoch 00039: ReduceLROnPlateau reducing learning rate to 1e-05.
...
Epoch 75/500
Epoch 00075: val_loss did not improve from 0.01673
16/16 - 0s - loss: 0.0219 - val_loss: 0.0251 - lr: 1.0000e-05 - 186ms/epoch - 12ms/step
Epoch 00075: early stopping
EMA
Prediction vs Close: 57.09% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 44.2948843329178
RMSE: 6.655440205795392
MAPE: 5.1903345685841265
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
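The WMA documented above is the linearly weighted moving average: within each window the newest price gets weight `timeperiod` and the oldest gets weight 1. A plain-Python sketch:

```python
# Linearly weighted moving average: the most recent price in each window
# gets weight `timeperiod`, the oldest gets weight 1. Returns one value
# per fully populated window, as TA-Lib does (modulo its NaN padding).
def wma(prices, timeperiod=30):
    denom = timeperiod * (timeperiod + 1) / 2   # sum of weights 1..timeperiod
    return [
        sum((j + 1) * prices[i - timeperiod + 1 + j]
            for j in range(timeperiod)) / denom
        for i in range(timeperiod - 1, len(prices))
    ]
```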
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.39 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4264.089, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3793.930, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.18 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3564.923, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3427.258, Time=0.07 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.92 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.35 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3429.258, Time=0.20 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.228 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1709.629
Date: Sun, 12 Dec 2021 AIC 3427.258
Time: 13:07:43 BIC 3446.021
Sample: 0 HQIC 3434.464
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1981 0.003 -389.386 0.000 -1.204 -1.192
ar.L2 -0.8974 0.006 -139.699 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.737 0.000 -0.410 -0.387
sigma2 4.0860 0.019 215.311 0.000 4.049 4.123
===================================================================================
Ljung-Box (L1) (Q): 14.57 Jarque-Bera (JB): 2460901.70
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 273.75
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm_2 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.24073, saving model to LSTM1.h5
17/17 - 2s - loss: 0.3924 - val_loss: 0.2407 - lr: 0.0010 - 2s/epoch - 122ms/step
...
Epoch 12/500
Epoch 00012: val_loss improved from 0.05722 to 0.03317, saving model to LSTM1.h5
17/17 - 0s - loss: 0.0355 - val_loss: 0.0332 - lr: 0.0010 - 227ms/epoch - 13ms/step
...
Epoch 00017: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
...
Epoch 00022: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
...
Epoch 00027: ReduceLROnPlateau reducing learning rate to 1e-05.
...
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.03317
17/17 - 0s - loss: 0.0247 - val_loss: 0.0511 - lr: 1.0000e-05 - 208ms/epoch - 12ms/step
Epoch 00062: early stopping
SMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 29.531169515594907
RMSE: 5.434258874547192
MAPE: 4.511922179897357
EMA
Prediction vs Close: 57.09% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 44.2948843329178
RMSE: 6.655440205795392
MAPE: 5.1903345685841265
WMA
Prediction vs Close: 56.72% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 34.81241095672678
RMSE: 5.900204314829002
MAPE: 4.770935413189914
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
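The TA-Lib signature above computes a double exponential moving average, DEMA(n) = 2·EMA(n) − EMA(EMA(n)). A minimal pure-Python sketch of that formula follows; note that TA-Lib's warm-up/seeding of the initial EMA values may differ slightly from the simple first-price seed used here.

```python
# DEMA(n) = 2*EMA(n) - EMA(EMA(n)). A pure-Python sketch of the
# TA-Lib DEMA call above; TA-Lib's warm-up/seeding may differ.
def ema(prices, timeperiod):
    k = 2.0 / (timeperiod + 1)   # standard EMA smoothing factor
    out = [prices[0]]            # seed with the first price
    for p in prices[1:]:
        out.append(out[-1] + k * (p - out[-1]))
    return out

def dema(prices, timeperiod=30):
    e1 = ema(prices, timeperiod)
    e2 = ema(e1, timeperiod)     # EMA of the EMA
    # 2*e1 - e2 cancels much of the single EMA's lag
    return [2 * a - b for a, b in zip(e1, e2)]
```

Subtracting the double-smoothed series removes much of the lag a single EMA introduces, which is why DEMA tracks price turns more tightly than EMA at the same period.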
89
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.38 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4436.126, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3965.317, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.28 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3736.589, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3598.951, Time=0.11 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.20 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.74 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3600.951, Time=0.24 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.069 seconds
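The stepwise trace above reduces to picking the candidate (p, d, q) order with the lowest finite AIC; the `inf` entries mark fits that failed or were rejected. Distilled from the numbers printed above:

```python
import math

# Candidate (p, d, q) orders and the AICs reported by the stepwise
# search above; math.inf marks fits that did not converge.
candidates = {
    (1, 3, 1): math.inf,
    (0, 3, 0): 4436.126,
    (1, 3, 0): 3965.317,
    (0, 3, 1): math.inf,
    (2, 3, 0): 3736.589,
    (3, 3, 0): 3598.951,
    (3, 3, 1): math.inf,
    (2, 3, 1): math.inf,
}
best_order = min(candidates, key=candidates.get)
print(best_order)  # -> (3, 3, 0), matching "Best model: ARIMA(3,3,0)(0,0,0)[0]"
```

pmdarima's stepwise search explores neighboring orders rather than the full grid, which is why only eight candidates appear before it settles on (3, 3, 0).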
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1795.475
Date: Sun, 12 Dec 2021 AIC 3598.951
Time: 13:09:08 BIC 3617.714
Sample: 0 HQIC 3606.157
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1983 0.003 -389.581 0.000 -1.204 -1.192
ar.L2 -0.8973 0.006 -139.732 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.649 0.000 -0.410 -0.387
sigma2 5.0573 0.023 215.292 0.000 5.011 5.103
===================================================================================
Ljung-Box (L1) (Q): 14.41 Jarque-Bera (JB): 2460553.80
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.89
Prob(H) (two-sided): 0.00 Kurtosis: 273.74
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
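The AIC in the table above follows directly from the reported log-likelihood via AIC = 2k − 2·ln L, with k = 4 estimated parameters (the three AR coefficients plus sigma2). A quick check against the printed values:

```python
# AIC = 2k - 2*lnL; the table reports lnL = -1795.475 and the model
# fits k = 4 parameters (ar.L1, ar.L2, ar.L3, sigma2).
def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

print(round(aic(-1795.475, 4), 3))  # -> 3598.95, the AIC in the table
```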
WARNING:tensorflow:Layer lstm_3 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.05104, saving model to LSTM1.h5 (loss 0.8102, lr 0.0010)
Epoch 2/500: val_loss improved from 0.05104 to 0.04974, saving model to LSTM1.h5 (loss 0.1873, lr 0.0010)
Epochs 3-52/500: val_loss did not improve from 0.04974 (loss 0.0376-0.0745, val_loss 0.1048-0.2709, ~12-15ms/step); ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 7 and to 1.0000e-05 at epoch 12
Epoch 00052: early stopping
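In every run above, training halts roughly 50 epochs after the best val_loss (here, best at epoch 2 and stop at epoch 52), consistent with an EarlyStopping-style patience of about 50. The sketch below re-creates that logic in plain Python; the patience value and the notebook's actual callback settings are inferred from the logs, not confirmed.

```python
# Patience-based early stopping: stop once val_loss has failed to
# improve for `patience` consecutive epochs. patience ~= 50 is an
# inference from the training logs above, not a confirmed setting.
def early_stop_epoch(val_losses, patience=50):
    best = float("inf")
    wait = 0
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, wait = vl, 0   # new best: reset the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch     # epoch at which training stops
    return None                  # ran to completion
```

For example, a run whose last improvement comes at epoch 2 stops at epoch 52 under patience 50, matching the log above.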
DEMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 52.107642174945944
RMSE: 7.2185623343534235
MAPE: 5.72607728989529
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
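KAMA adapts its smoothing to an efficiency ratio ER = |net change over the period| / Σ|bar-to-bar changes|, interpolating between a fast and a slow EMA constant so the average speeds up in trends and slows in chop. A hedged pure-Python sketch follows; the fast/slow values of 2 and 30 are Kaufman's conventional defaults, and the seeding convention (price at the first valid bar) is one of several in use, so the output may not match TA-Lib exactly.

```python
def kama(prices, timeperiod=30, fast=2, slow=30):
    # Kaufman Adaptive MA: the smoothing constant interpolates between
    # the fast and slow EMA constants according to the efficiency
    # ratio ER. Seeding conventions vary; this seeds with the price
    # at the first valid bar, which may differ from TA-Lib's output.
    fast_sc = 2.0 / (fast + 1)
    slow_sc = 2.0 / (slow + 1)
    out = [None] * timeperiod            # no full window yet
    prev = prices[timeperiod - 1]        # seed value
    for i in range(timeperiod, len(prices)):
        change = abs(prices[i] - prices[i - timeperiod])
        volatility = sum(
            abs(prices[j] - prices[j - 1])
            for j in range(i - timeperiod + 1, i + 1)
        )
        er = change / volatility if volatility else 0.0
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        prev = prev + sc * (prices[i] - prev)
        out.append(prev)
    return out
```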
18
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.33 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4190.464, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3724.371, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.20 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3494.154, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3357.435, Time=0.07 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.84 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.50 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3359.435, Time=0.16 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.233 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1674.717
Date: Sun, 12 Dec 2021 AIC 3357.435
Time: 13:10:28 BIC 3376.198
Sample: 0 HQIC 3364.641
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1955 0.003 -381.246 0.000 -1.202 -1.189
ar.L2 -0.8964 0.007 -135.835 0.000 -0.909 -0.883
ar.L3 -0.3971 0.006 -67.229 0.000 -0.409 -0.385
sigma2 3.7466 0.018 211.623 0.000 3.712 3.781
===================================================================================
Ljung-Box (L1) (Q): 14.20 Jarque-Bera (JB): 2338363.32
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 3.76
Prob(H) (two-sided): 0.00 Kurtosis: 266.93
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm_4 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.23368, saving model to LSTM1.h5 (loss 0.2105, lr 0.0010)
Epoch 4/500: val_loss improved from 0.23368 to 0.01871, saving model to LSTM1.h5 (loss 0.0518)
Epoch 7/500: val_loss improved from 0.01871 to 0.01275, saving model to LSTM1.h5 (loss 0.0406)
Epoch 9/500: val_loss improved from 0.01275 to 0.00765, saving model to LSTM1.h5 (loss 0.0366)
Epochs 10-59/500: val_loss did not improve from 0.00765 (loss 0.0224-0.0327, val_loss 0.0088-0.0257); ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 14 and to 1.0000e-05 at epoch 19
Epoch 00059: early stopping
KAMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 101.2314633840329
RMSE: 10.06138476473457
MAPE: 7.671150891933135
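The metrics printed after each run are the standard error measures plus a directional hit rate. The exact alignment the notebook uses for "Prediction vs Close" and "Prediction vs Prediction" is not shown in the output, so the sketch below gives the textbook forms, with the directional measure written under one plausible reading (predicted move vs. actual move).

```python
import math

def mse(actual, predicted):
    # mean squared error
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # root mean squared error, in price units
    return math.sqrt(mse(actual, predicted))

def mape(actual, predicted):
    # mean absolute percentage error, in percent
    return 100.0 * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted)
    ) / len(actual)

def directional_accuracy(actual, predicted):
    # One plausible reading of "Prediction vs Close": the share of
    # steps where the predicted move and the actual move share a sign.
    hits = sum(
        (predicted[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0
        for i in range(1, len(actual))
    )
    return 100.0 * hits / (len(actual) - 1)
```

MSE and RMSE penalize large misses (note KAMA's RMSE of ~10.06 vs SMA's ~5.43 above), while MAPE and the directional hit rate are scale-free, which is why a model can rank differently across the two groups.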
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
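MIDPOINT is the simplest of the overlays used here: the midpoint of the highest and lowest price over the trailing window. A pure-Python sketch of the call signature above:

```python
def midpoint(prices, timeperiod=14):
    # (max + min) / 2 over each trailing window of `timeperiod` bars;
    # the first timeperiod-1 slots have no full window (None here,
    # where TA-Lib would emit NaN).
    out = [None] * (timeperiod - 1)
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append((max(window) + min(window)) / 2.0)
    return out
```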
14
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.34 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4212.289, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3747.746, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.17 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3523.401, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3387.759, Time=0.07 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.87 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.63 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3389.758, Time=0.16 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.373 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1689.879
Date: Sun, 12 Dec 2021 AIC 3387.759
Time: 13:12:09 BIC 3406.522
Sample: 0 HQIC 3394.964
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1878 0.003 -345.315 0.000 -1.195 -1.181
ar.L2 -0.8876 0.007 -121.809 0.000 -0.902 -0.873
ar.L3 -0.3957 0.007 -60.127 0.000 -0.409 -0.383
sigma2 3.8904 0.020 193.404 0.000 3.851 3.930
===================================================================================
Ljung-Box (L1) (Q): 13.21 Jarque-Bera (JB): 1659080.01
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.08 Skew: 3.28
Prob(H) (two-sided): 0.00 Kurtosis: 225.31
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm_5 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.04090, saving model to LSTM1.h5 (loss 0.1551, lr 0.0010)
Epoch 2/500: val_loss improved from 0.04090 to 0.03024, saving model to LSTM1.h5 (loss 0.0955)
Epoch 5/500: val_loss improved from 0.03024 to 0.02879, saving model to LSTM1.h5 (loss 0.0549)
Epoch 10/500: val_loss improved from 0.02879 to 0.02841, saving model to LSTM1.h5 (loss 0.0326)
Epoch 11/500: val_loss improved from 0.02841 to 0.01350, saving model to LSTM1.h5 (loss 0.0343)
Epochs 12-59/500: val_loss did not improve from 0.01350 (loss 0.0191-0.0327, val_loss 0.0183-0.0422); ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 16 and to 1.0000e-05 at epoch 21
Epoch 60/500 Epoch 00060: val_loss did not improve from 0.01350 58/58 - 1s - loss: 0.0214 - val_loss: 0.0241 - lr: 1.0000e-05 - 635ms/epoch - 11ms/step Epoch 61/500 Epoch 00061: val_loss did not improve from 0.01350 58/58 - 1s - loss: 0.0209 - val_loss: 0.0246 - lr: 1.0000e-05 - 590ms/epoch - 10ms/step Epoch 00061: early stopping
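The learning-rate steps visible in these logs come from the `ReduceLROnPlateau` callback (configured with `factor=0.1, patience=5, min_lr=0.00001` in `get_lstm` below). A toy sketch of its core logic, useful for reading the log — this is a simplified, hypothetical helper, not the Keras implementation (which also supports cooldown and improvement thresholds):

```python
def simulate_plateau(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Toy sketch of ReduceLROnPlateau: after `patience` epochs without a
    new best val_loss, multiply the learning rate by `factor`, flooring it
    at `min_lr`."""
    best, wait, schedule = float('inf'), 0, []
    for v in val_losses:
        if v < best:
            best, wait = v, 0          # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:       # plateau long enough: cut the lr
                lr, wait = max(lr * factor, min_lr), 0
        schedule.append(lr)
    return schedule

# A run that improves twice and then plateaus, like the log above:
losses = [0.0167, 0.0135] + [0.02] * 12
print(f"{simulate_plateau(losses)[-1]:.0e}")  # prints 1e-05
```

After two reductions the rate hits `min_lr` and stays there, which matches the long `lr: 1.0000e-05` tail in the log.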
SMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 29.531169515594907
RMSE: 5.434258874547192
MAPE: 4.511922179897357
EMA
Prediction vs Close: 57.09% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 44.2948843329178
RMSE: 6.655440205795392
MAPE: 5.1903345685841265
WMA
Prediction vs Close: 56.72% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 34.81241095672678
RMSE: 5.900204314829002
MAPE: 4.770935413189914
DEMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 52.107642174945944
RMSE: 7.2185623343534235
MAPE: 5.72607728989529
KAMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 101.2314633840329
RMSE: 10.06138476473457
MAPE: 7.671150891933135
MIDPOINT
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 120.91184599154492
RMSE: 10.995992269529154
MAPE: 9.137686493675425
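For reference, the error metrics in these summaries can be reproduced from aligned prediction and close arrays along the following lines. This is a sketch with a hypothetical `report` helper; the notebook's exact directional-accuracy definition is not shown, so the sign-of-move comparison here is an assumption inferred from the "Prediction vs Close" label:

```python
import numpy as np

def report(name, preds, close):
    """Hypothetical helper mirroring the summaries above. Directional accuracy
    counts how often the predicted day-over-day move matches the actual one."""
    acc = (np.sign(np.diff(preds)) == np.sign(np.diff(close))).mean() * 100
    err = preds - close
    mse = np.mean(err ** 2)                     # mean squared error
    rmse = np.sqrt(mse)                         # root mean squared error
    mape = np.mean(np.abs(err / close)) * 100   # mean absolute percentage error
    print(f"{name}\nPrediction vs Close: {acc:.2f}% Accuracy")
    print(f"MSE: {mse}\nRMSE: {rmse}\nMAPE: {mape}")
    return mse, rmse, mape

preds = np.array([100.0, 102.0, 101.0, 103.0])
close = np.array([100.0, 101.5, 101.8, 102.9])
report("SMA", preds, close)
```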
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.35 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4414.515, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3944.062, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.28 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3715.173, Time=0.04 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3577.471, Time=0.10 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.74 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.47 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3579.471, Time=0.17 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.211 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1784.736
Date: Sun, 12 Dec 2021 AIC 3577.471
Time: 13:14:10 BIC 3596.235
Sample: 0 HQIC 3584.677
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.844 0.000 -1.204 -1.192
ar.L2 -0.8974 0.006 -139.861 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.862 0.000 -0.410 -0.387
sigma2 4.9242 0.023 215.469 0.000 4.879 4.969
===================================================================================
Ljung-Box (L1) (Q): 14.55 Jarque-Bera (JB): 2468024.38
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 274.15
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
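The stepwise trace above picks the order with the lowest AIC. The idea can be illustrated with a much simpler sketch: fit AR(p) models by ordinary least squares for a few candidate p and keep the one minimizing a Gaussian AIC. This is an illustration only, not pmdarima's Hyndman-Khandakar algorithm, which also searches q and d and fits by maximum likelihood:

```python
import numpy as np

def ar_aic(y, p):
    # OLS fit of AR(p): y_t = c + sum_i phi_i * y_{t-i} + e_t
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    target = y[p:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss = np.sum((target - X @ beta) ** 2)
    n, k = len(target), p + 1
    return n * np.log(rss / n) + 2 * k  # Gaussian AIC up to an additive constant

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(2, 500):                 # simulate a true AR(2) process
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

aics = {p: ar_aic(y, p) for p in (1, 2, 3)}
best = min(aics, key=aics.get)
print(best)  # AIC should favour an order near the true p = 2
```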
WARNING:tensorflow:Layer lstm_6 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.01665, saving model to LSTM1.h5 (loss 0.3626). Epochs 2-51: val_loss never improved on 0.01665 (train loss fell to ≈ 0.031-0.040 while val loss hovered around 0.12-0.13); ReduceLROnPlateau lowered the learning rate to 1e-04 at epoch 6 and to the 1e-05 floor at epoch 11, and early stopping ended training at epoch 51. (Per-epoch log omitted.)
SMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 29.531169515594907
RMSE: 5.434258874547192
MAPE: 4.511922179897357
EMA
Prediction vs Close: 57.09% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 44.2948843329178
RMSE: 6.655440205795392
MAPE: 5.1903345685841265
WMA
Prediction vs Close: 56.72% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 34.81241095672678
RMSE: 5.900204314829002
MAPE: 4.770935413189914
DEMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 52.107642174945944
RMSE: 7.2185623343534235
MAPE: 5.72607728989529
KAMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 101.2314633840329
RMSE: 10.06138476473457
MAPE: 7.671150891933135
MIDPOINT
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 120.91184599154492
RMSE: 10.995992269529154
MAPE: 9.137686493675425
T3
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 41.51394815576297
RMSE: 6.443131859256255
MAPE: 5.507945991108928
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.42 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4352.703, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3889.412, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.18 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3689.930, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3574.245, Time=0.06 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.84 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.57 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3576.245, Time=0.14 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.315 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1783.123
Date: Sun, 12 Dec 2021 AIC 3574.245
Time: 13:15:36 BIC 3593.008
Sample: 0 HQIC 3581.451
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1480 0.004 -302.430 0.000 -1.155 -1.141
ar.L2 -0.8300 0.008 -99.682 0.000 -0.846 -0.814
ar.L3 -0.3687 0.007 -50.527 0.000 -0.383 -0.354
sigma2 4.9055 0.028 175.970 0.000 4.851 4.960
===================================================================================
Ljung-Box (L1) (Q): 11.61 Jarque-Bera (JB): 1261976.58
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.16 Skew: 2.52
Prob(H) (two-sided): 0.00 Kurtosis: 196.90
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm_7 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epochs 1-5 of 500: val_loss improved from inf to 0.25314, then to 0.05296 (epoch 4) and 0.01559 (epoch 5), saving model to LSTM1.h5 each time. Epochs 6-55: val_loss never improved on 0.01559 (val loss ≈ 0.019-0.024); ReduceLROnPlateau lowered the learning rate to 1e-04 at epoch 10 and to the 1e-05 floor at epoch 15, and early stopping ended training at epoch 55. (Per-epoch log omitted.)
SMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 29.531169515594907
RMSE: 5.434258874547192
MAPE: 4.511922179897357
EMA
Prediction vs Close: 57.09% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 44.2948843329178
RMSE: 6.655440205795392
MAPE: 5.1903345685841265
WMA
Prediction vs Close: 56.72% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 34.81241095672678
RMSE: 5.900204314829002
MAPE: 4.770935413189914
DEMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 52.107642174945944
RMSE: 7.2185623343534235
MAPE: 5.72607728989529
KAMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 101.2314633840329
RMSE: 10.06138476473457
MAPE: 7.671150891933135
MIDPOINT
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 120.91184599154492
RMSE: 10.995992269529154
MAPE: 9.137686493675425
T3
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 41.51394815576297
RMSE: 6.443131859256255
MAPE: 5.507945991108928
TEMA
Prediction vs Close: 50.37% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 72.3302204365722
RMSE: 8.504717540081634
MAPE: 7.413730210152267
Runtime: mins: 13.87277515
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment1.png to Experiment1 (2).png
img = cv2.cvtColor(cv2.imread('Experiment1.png'), cv2.COLOR_BGR2RGB)  # cv2 loads BGR; convert for matplotlib
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
Excess kurtosis is a metric that compares the kurtosis of a distribution against the kurtosis of a normal distribution. The kurtosis of a normal distribution equals 3. Therefore, the excess kurtosis is found using the formula below:
Excess Kurtosis = Kurtosis – 3
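As a quick sanity check, the excess kurtosis of a large normal sample should be near zero. A sketch in plain NumPy (note that `scipy.stats.kurtosis` already returns excess kurtosis by default, via `fisher=True`):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)

mu, sigma = x.mean(), x.std()
kurtosis = np.mean((x - mu) ** 4) / sigma ** 4  # fourth standardized moment
excess = kurtosis - 3                           # subtract the normal benchmark
print(round(excess, 2))  # close to 0 for a normal sample
```

By contrast, the heavy-tailed ARIMA residuals reported above (e.g. Kurtosis 274.15 in the SARIMAX table) have enormous positive excess kurtosis.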
np.save("X_train_appl.npy", X_train)
np.save("y_train_appl.npy", y_train)
np.save("X_test_appl.npy", X_test)
np.save("y_test_appl.npy", y_test)
np.save("yc_train_appl.npy", yc_train)
np.save("yc_test_appl.npy", yc_test)
np.save('index_train_appl.npy', index_train)
np.save('index_test_appl.npy', index_test)
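The arrays saved above can be restored later with `np.load`; a minimal round-trip check using a stand-in array (the real `X_train` comes from the split above):

```python
import numpy as np

X_train = np.arange(12, dtype=float).reshape(4, 3)  # stand-in for the real training array
np.save("X_train_appl.npy", X_train)                # same call as above
X_loaded = np.load("X_train_appl.npy")              # restore in a later session
print(np.array_equal(X_train, X_loaded))  # prints True
```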
list(simulation1.keys())
['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'T3', 'TEMA']
with open('simulation1_data.json') as json_file:
    simulation1 = json.load(json_file)
fileimg = 'Experiment1'
for SIM in list(simulation1.keys()):
    plot_train(simulation1, SIM)
    plot_test(simulation1, SIM)
----- Train RMSE for SMA ----- 7.935503486463878 ----- Train_MSE_LSTM for SMA ----- 62.97221558368037 ----- Train MAE LSTM for SMA ----- 6.979776083083713
----- Test RMSE for SMA----- 5.434258874547192 ----- Test_MSE_LSTM for SMA----- 29.531169515594907 ----- Test_MAE_LSTM for SMA----- 4.511922179897357
----- Train RMSE for EMA ----- 9.402372564274692 ----- Train_MSE_LSTM for EMA ----- 88.40460983742544 ----- Train MAE LSTM for EMA ----- 8.28586532352354
----- Test RMSE for EMA----- 6.655440205795392 ----- Test_MSE_LSTM for EMA----- 44.2948843329178 ----- Test_MAE_LSTM for EMA----- 5.1903345685841265
----- Train RMSE for WMA ----- 9.895902209840198 ----- Train_MSE_LSTM for WMA ----- 97.92888054672011 ----- Train MAE LSTM for WMA ----- 8.733617713903124
----- Test RMSE for WMA----- 5.900204314829002 ----- Test_MSE_LSTM for WMA----- 34.81241095672678 ----- Test_MAE_LSTM for WMA----- 4.770935413189914
----- Train RMSE for DEMA ----- 10.759004484921052 ----- Train_MSE_LSTM for DEMA ----- 115.75617750655131 ----- Train MAE LSTM for DEMA ----- 9.577381117820352
----- Test RMSE for DEMA----- 7.2185623343534235 ----- Test_MSE_LSTM for DEMA----- 52.107642174945944 ----- Test_MAE_LSTM for DEMA----- 5.72607728989529
----- Train RMSE for KAMA ----- 9.565138688518502 ----- Train_MSE_LSTM for KAMA ----- 91.49187813059346 ----- Train MAE LSTM for KAMA ----- 8.554695393558186
----- Test RMSE for KAMA----- 10.06138476473457 ----- Test_MSE_LSTM for KAMA----- 101.2314633840329 ----- Test_MAE_LSTM for KAMA----- 7.671150891933135
----- Train RMSE for MIDPOINT ----- 8.472630816745282 ----- Train_MSE_LSTM for MIDPOINT ----- 71.78547295686181 ----- Train MAE LSTM for MIDPOINT ----- 7.568616716744433
----- Test RMSE for MIDPOINT----- 10.995992269529154 ----- Test_MSE_LSTM for MIDPOINT----- 120.91184599154492 ----- Test_MAE_LSTM for MIDPOINT----- 9.137686493675425
----- Train RMSE for T3 ----- 10.697204443760125 ----- Train_MSE_LSTM for T3 ----- 114.43018291160138 ----- Train MAE LSTM for T3 ----- 9.607836069383296
----- Test RMSE for T3----- 6.443131859256255 ----- Test_MSE_LSTM for T3----- 41.51394815576297 ----- Test_MAE_LSTM for T3----- 5.507945991108928
----- Train RMSE for TEMA ----- 6.840259844014339 ----- Train_MSE_LSTM for TEMA ----- 46.78915473363507 ----- Train MAE LSTM for TEMA ----- 4.502232091756086
----- Test RMSE for TEMA----- 8.504717540081634 ----- Test_MSE_LSTM for TEMA----- 72.3302204365722 ----- Test_MAE_LSTM for TEMA----- 7.413730210152267
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)  # scale features to (-1, 1)
    y_scale_dataset = y_scaler.fit_transform(y_value)  # scale target to (-1, 1)
    # Get data and check shape: X has shape (samples, n_steps_in, n_features) --
    # each n_steps_in x n_features slice is n_steps_in days' worth of data;
    # yc holds the corresponding closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    # pdb.set_trace()
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    # yc_train, yc_test = split_train_test(original_data)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]     # n_steps_in (e.g. 3)
    feature_size = X_train.shape[2]  # number of features
    output_dim = y_train.shape[1]    # 1
# # Option 1
# # Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
# model.add(Dense(units=64,activation='relu'))
# model.add(Dropout(0.5))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')
# ## Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# option 2
model = Sequential()
model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
model.add(Dense(64))
model.add(Dense(units=output_dim))
model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])  # note: accuracy is not a meaningful metric for regression and stays near zero in the logs
# Common code
callbacks = [
EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
ModelCheckpoint('LSTM2.h5', verbose=1, save_best_only=True, save_weights_only=True)]
fname1 = img_file+'.png'
tensorflow.keras.utils.plot_model(
model, to_file=fname1, show_shapes=True, show_dtype=False,
show_layer_names=True, expand_nested=False, dpi=96,
layer_range=None, show_layer_activations=False
)
history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# plot loss
fname2 = img_file+'-'+ma
plt.title(img_file+'-'+ma+' Loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='validation')
pyplot.legend()
pyplot.savefig(fname2+'.png',dpi='figure')
pyplot.show()
# Option 3
# define custom activation
#
# class Double_Tanh(Activation):
# def __init__(self, activation, **kwargs):
# super(Double_Tanh, self).__init__(activation, **kwargs)
# self.__name__ = 'double_tanh'
# def double_tanh(x):
# return (K.tanh(x) * 2)
# get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
# # Model Generation
# model = Sequential()
# #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
# model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
# model.add(Dense(1))
# model.add(Activation(double_tanh))
# model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 4
# Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(X_train.shape[1], 1)))
# model.add(LSTM(units=int(lstm_len/2)))
# model.add(Dense(1, activation='sigmoid'))
# model.compile(loss='mean_squared_error', optimizer='adam')
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Generate predictions
predictiontr = model.predict(X_train, verbose=0)
predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
outputtr = []
for i in range(len(predictiontr)):
outputtr.extend(predictiontr[i])
predictiontr = outputtr
# Generate error data on the original price scale
# (inverse-transform y_train so both series are in price units)
Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
mse_tr = mean_squared_error(Original_tr, predictiontr)
rmse_tr = mse_tr ** 0.5
mae_tr = mean_absolute_error(Original_tr, predictiontr)
predictionte = model.predict(X_test, verbose=0)
predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
outputte = []
for i in range(len(predictionte)):
outputte.extend(predictionte[i])
predictionte = outputte
# Generate error data on the original price scale
Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
mse_te = mean_squared_error(Original_te, predictionte)
rmse_te = mse_te ** 0.5
mae_te = mean_absolute_error(Original_te, predictionte)
return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
if __name__ == '__main__':
start_time = timeit.default_timer()
simulation2 = {}
imgfile = 'Experiment2'
for ma in optimized_period:
print(ma)
print(functions[ma])
print ( int( optimized_period[ma]))
# if ma == 'SMA':
low_vol = df.apply(lambda c: functions[ma](c, timeperiod = int( optimized_period[ma])))
low_vol = low_vol.fillna(0)
low_vol_data = df['close']
high_vol = pd.DataFrame()
df2 = df.copy()
for i in df2.columns:
if i in low_vol.columns:
high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
high_vol_data = df['close']
## *****************************************************
# Generate ARIMA and LSTM predictions
print('\nWorking on ' + ma + ' predictions')
try:
print('parameters used : ', train_len, test_len)
low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
except Exception:
print('ARIMA error, skipping to next MA type')
continue
Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps
mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
rmse_ftr = mse_ftr ** 0.5
mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
rmse = mse ** 0.5
mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
# Generate prediction accuracy
actual = df['close'].tail(test_len).values
result_1 = []
result_2 = []
for i in range(1, len(final_prediction)):
# Compare prediction to previous close price
if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
result_1.append(1)
elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
result_1.append(1)
else:
result_1.append(0)
# Compare prediction to previous prediction
if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
result_2.append(1)
elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
result_2.append(1)
else:
result_2.append(0)
accuracy_1 = np.mean(result_1)
accuracy_2 = np.mean(result_2)
simulation2[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
'rmse': low_vol_rmse, 'mae' : low_vol_mae},
'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
'rmse': high_vol_rmse, 'mae' : high_vol_mae},
'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
'rmse': rmse_ftr, 'mae' : mae_ftr},
'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
'rmse': rmse, 'mae': mae },
'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
# save simulation data here as checkpoint
with open('simulation2_data.json', 'w') as fp:
json.dump(simulation2, fp)
for ma in simulation2.keys():
print('\n' + ma)
print('Prediction vs Close:\t\t' + str(round(100*simulation2[ma]['accuracy']['prediction vs close'], 2))
+ '% Accuracy')
print('Prediction vs Prediction:\t' + str(round(100*simulation2[ma]['accuracy']['prediction vs prediction'], 2))
+ '% Accuracy')
print('MSE:\t', simulation2[ma]['final']['mse'],
'\nRMSE:\t', simulation2[ma]['final']['rmse'],
'\nMAE:\t', simulation2[ma]['final']['mae'])
# else:
# break
elapsed = timeit.default_timer() - start_time
print('Runtime: mins:',elapsed/60)
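The loop above hinges on one idea: smooth the close series with a moving average to get a low-volatility component for ARIMA, keep the residual as the high-volatility component for the LSTM, and sum the two predictions. A minimal sketch of the decomposition (a plain SMA stands in for the TA-Lib MA functions; recombination is exact by construction):

```python
import pandas as pd

def decompose(close: pd.Series, period: int):
    """Split a series into a smooth (low-volatility) part and its
    residual (high-volatility) part, mirroring the loop above."""
    low_vol = close.rolling(period).mean().fillna(0)  # SMA as a stand-in for any MA type
    high_vol = close - low_vol                        # residual left to the LSTM
    return low_vol, high_vol

close = pd.Series([100.0, 101.0, 103.0, 102.0, 105.0])
low, high = decompose(close, period=2)
# Summing the components recovers the original series exactly
assert ((low + high) - close).abs().max() < 1e-9
```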
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.44 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4157.020, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3687.148, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.14 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3458.651, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3322.133, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.53 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.57 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3324.133, Time=0.15 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.049 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1657.067
Date: Sun, 12 Dec 2021 AIC 3322.133
Time: 13:21:51 BIC 3340.897
Sample: 0 HQIC 3329.339
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1966 0.003 -387.226 0.000 -1.203 -1.191
ar.L2 -0.8952 0.006 -138.692 0.000 -0.908 -0.883
ar.L3 -0.3968 0.006 -68.284 0.000 -0.408 -0.385
sigma2 3.5858 0.017 214.535 0.000 3.553 3.619
===================================================================================
Ljung-Box (L1) (Q): 14.47 Jarque-Bera (JB): 2428881.42
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 271.99
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.15003, saving model to LSTM2.h5
48/48 - 6s - loss: 0.1416 - accuracy: 0.0000e+00 - val_loss: 0.1500 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 6s/epoch - 133ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.15003 to 0.00567, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0666 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 0.0010 - 359ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.00567
48/48 - 0s - loss: 0.0245 - accuracy: 0.0000e+00 - val_loss: 0.1272 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 318ms/epoch - 7ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.00567
48/48 - 0s - loss: 0.0281 - accuracy: 0.0000e+00 - val_loss: 0.0126 - val_accuracy: 0.0037 - lr: 0.0010 - 313ms/epoch - 7ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.00567
48/48 - 0s - loss: 0.0131 - accuracy: 0.0000e+00 - val_loss: 0.1042 - val_accuracy: 0.0037 - lr: 0.0010 - 328ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.00567 to 0.00338, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0129 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 0.0010 - 327ms/epoch - 7ms/step
[epochs 7-56 truncated: val_loss did not improve beyond 0.00338; ReduceLROnPlateau stepped the learning rate down to its 1e-05 floor and the training loss plateaued near 0.0011]
Epoch 00056: early stopping
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 63.041023819643854
RMSE: 7.939837770360541
MAE: 6.449589599500938
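The two accuracy figures above come from the sign-agreement loop in the script: "Prediction vs Close" checks whether the prediction sits on the same side of yesterday's close as the price actually moved, and "Prediction vs Prediction" checks whether consecutive prediction changes share the sign of the actual changes. The same counts can be computed vectorized with NumPy (equivalent to the loop; strict inequalities, so flat steps score 0):

```python
import numpy as np

def directional_accuracy(prediction, actual):
    """Vectorized version of the sign-agreement loop in the script."""
    p = np.asarray(prediction, dtype=float)
    a = np.asarray(actual, dtype=float)
    # 'prediction vs close': prediction and actual on the same side of the previous close
    vs_close = (((p[1:] > a[:-1]) & (a[1:] > a[:-1])) |
                ((p[1:] < a[:-1]) & (a[1:] < a[:-1])))
    # 'prediction vs prediction': consecutive changes share the same sign
    vs_pred = (((np.diff(p) > 0) & (np.diff(a) > 0)) |
               ((np.diff(p) < 0) & (np.diff(a) < 0)))
    return vs_close.mean(), vs_pred.mean()

acc_close, acc_pred = directional_accuracy([10, 12, 11, 13], [10, 11, 12, 14])
```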
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.36 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4231.556, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3761.238, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.20 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3532.227, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3394.496, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.04 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.51 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3396.496, Time=0.26 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.567 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1693.248
Date: Sun, 12 Dec 2021 AIC 3394.496
Time: 13:23:19 BIC 3413.260
Sample: 0 HQIC 3401.702
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.569 0.000 -1.204 -1.192
ar.L2 -0.8976 0.006 -139.811 0.000 -0.910 -0.885
ar.L3 -0.3984 0.006 -68.662 0.000 -0.410 -0.387
sigma2 3.9230 0.018 215.372 0.000 3.887 3.959
===================================================================================
Ljung-Box (L1) (Q): 14.54 Jarque-Bera (JB): 2462173.05
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 273.82
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04754, saving model to LSTM2.h5
16/16 - 5s - loss: 0.0550 - accuracy: 0.0000e+00 - val_loss: 0.0475 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 295ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.04754 to 0.04458, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0050 - accuracy: 0.0000e+00 - val_loss: 0.0446 - val_accuracy: 0.0037 - lr: 0.0010 - 158ms/epoch - 10ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.04458 to 0.01126, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0138 - accuracy: 0.0000e+00 - val_loss: 0.0113 - val_accuracy: 0.0037 - lr: 0.0010 - 159ms/epoch - 10ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.01126
16/16 - 0s - loss: 0.0062 - accuracy: 0.0000e+00 - val_loss: 0.1306 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 135ms/epoch - 8ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.01126
16/16 - 0s - loss: 0.0341 - accuracy: 0.0000e+00 - val_loss: 0.0492 - val_accuracy: 0.0037 - lr: 0.0010 - 130ms/epoch - 8ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.01126
16/16 - 0s - loss: 0.0235 - accuracy: 0.0000e+00 - val_loss: 0.1911 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 136ms/epoch - 9ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.01126
16/16 - 0s - loss: 0.0638 - accuracy: 0.0000e+00 - val_loss: 0.0348 - val_accuracy: 0.0037 - lr: 0.0010 - 126ms/epoch - 8ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.01126 to 0.00602, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0400 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 0.0010 - 154ms/epoch - 10ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.00602
16/16 - 0s - loss: 0.0047 - accuracy: 0.0000e+00 - val_loss: 0.0114 - val_accuracy: 0.0037 - lr: 0.0010 - 121ms/epoch - 8ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.00602
16/16 - 0s - loss: 0.0032 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 0.0010 - 126ms/epoch - 8ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.00602 to 0.00571, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 0.0010 - 160ms/epoch - 10ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.00571 to 0.00529, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0020 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 157ms/epoch - 10ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.00529
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 0.0010 - 135ms/epoch - 8ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.00529 to 0.00528, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 159ms/epoch - 10ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.00528
16/16 - 0s - loss: 9.0025e-04 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 124ms/epoch - 8ms/step
Epoch 16/500
Epoch 00016: val_loss improved from 0.00528 to 0.00527, saving model to LSTM2.h5
16/16 - 0s - loss: 8.7937e-04 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 152ms/epoch - 9ms/step
Epoch 17/500
Epoch 00017: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00017: val_loss improved from 0.00527 to 0.00524, saving model to LSTM2.h5
16/16 - 0s - loss: 8.7268e-04 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 0.0010 - 157ms/epoch - 10ms/step
[epochs 18-40 truncated: val_loss did not improve beyond 0.00524; ReduceLROnPlateau stepped the learning rate down to its 1e-05 floor]
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0846e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0820e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 126ms/epoch - 8ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0793e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 127ms/epoch - 8ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0766e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0738e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0710e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0682e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0653e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0624e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0595e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 136ms/epoch - 9ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0566e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0536e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0506e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0475e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0444e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 123ms/epoch - 8ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0413e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 120ms/epoch - 8ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0382e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0350e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0318e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0286e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 126ms/epoch - 8ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0253e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0220e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0187e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0154e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0120e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 126ms/epoch - 8ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0086e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 136ms/epoch - 9ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.00524
16/16 - 0s - loss: 8.0051e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 00067: early stopping
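The checkpoint, learning-rate-reduction, and early-stopping messages in the log above follow the standard Keras `ReduceLROnPlateau` pattern: after a fixed number of epochs without `val_loss` improvement, the learning rate is multiplied by a factor, floored at a minimum. As a simplified replay of that logic (not the exact Keras implementation; the patience of 5 is inferred from the gap between the improvement at epoch 3 and the reduction at epoch 8 in the run below):

```python
def replay_reduce_lr(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Simplified replay of ReduceLROnPlateau: after `patience` epochs
    without val_loss improvement, multiply lr by `factor`, floored at min_lr."""
    best = float("inf")
    wait = 0
    schedule = []
    for vl in val_losses:
        if vl < best:
            best, wait = vl, 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
        schedule.append(lr)
    return schedule

# val_loss pattern like the runs here: improves for 3 epochs, then plateaus
sched = replay_reduce_lr([0.1201, 0.0634, 0.0059] + [0.008] * 12)
# lr drops at epoch 8 (to 1e-4) and epoch 13 (to 1e-5), mirroring the log
```

This explains why, once the floor of 1e-05 is reached, the callback keeps logging "reducing learning rate to 1e-05" without further effect.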
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 63.041023819643854
RMSE: 7.939837770360541
MAPE: 6.449589599500938
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 63.66877348603133
RMSE: 7.979271488427457
MAPE: 6.567170782771208
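The "Accuracy" figures above are directional hit rates (did the forecast move in the same direction as the actual series), while MSE/RMSE/MAPE measure level error. A numpy sketch of how such metrics can be computed; the function name and the exact definition of the directional comparison are illustrative assumptions, not read from the notebook:

```python
import numpy as np

def eval_forecast(y_true, y_pred):
    """Level-error and directional-accuracy metrics for a price forecast."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    # Directional accuracy (assumed definition): does the predicted
    # step-to-step change share the sign of the actual change?
    directional = np.mean(
        np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))) * 100
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "Directional %": directional}
```

A hit rate near 50% on the directional test, as seen above, is roughly what a coin flip would achieve, which is the usual caution when reading these numbers.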
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
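The docstring above is TA-Lib's `WMA`. A linearly weighted moving average gives the newest bar the largest weight; a numpy sketch equivalent in spirit to `talib.WMA` (not the library's implementation):

```python
import numpy as np

def wma(price, timeperiod=30):
    """Weighted moving average: weights 1..timeperiod, newest bar heaviest.
    The first timeperiod-1 outputs are NaN, as in TA-Lib."""
    price = np.asarray(price, float)
    w = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        out[i] = np.dot(price[i - timeperiod + 1:i + 1], w) / w.sum()
    return out
```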
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.36 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4264.089, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3793.930, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.18 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3564.923, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3427.258, Time=0.10 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.54 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.39 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3429.258, Time=0.23 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.925 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1709.629
Date: Sun, 12 Dec 2021 AIC 3427.258
Time: 13:24:45 BIC 3446.021
Sample: 0 HQIC 3434.464
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1981 0.003 -389.386 0.000 -1.204 -1.192
ar.L2 -0.8974 0.006 -139.699 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.737 0.000 -0.410 -0.387
sigma2 4.0860 0.019 215.311 0.000 4.049 4.123
===================================================================================
Ljung-Box (L1) (Q): 14.57 Jarque-Bera (JB): 2460901.70
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 273.75
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
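In a hybrid ARIMA-LSTM setup of this kind, the LSTM is typically trained on what the ARIMA fit leaves behind. A hedged numpy sketch of turning a 1-D residual series into the supervised windows an LSTM input layer expects; the `lookback` length and function name are assumptions for illustration:

```python
import numpy as np

def make_windows(residuals, lookback=10):
    """Slice a 1-D series into (samples, lookback, 1) inputs and
    next-step targets, the 3-D shape a Keras LSTM layer expects."""
    r = np.asarray(residuals, float)
    X = np.stack([r[i:i + lookback] for i in range(len(r) - lookback)])
    y = r[lookback:]
    return X[..., None], y
```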
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.12010, saving model to LSTM2.h5
17/17 - 5s - loss: 0.1148 - accuracy: 0.0000e+00 - val_loss: 0.1201 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 308ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.12010 to 0.06339, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0413 - accuracy: 0.0000e+00 - val_loss: 0.0634 - val_accuracy: 0.0037 - lr: 0.0010 - 152ms/epoch - 9ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.06339 to 0.00588, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0254 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 0.0010 - 152ms/epoch - 9ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.00588
17/17 - 0s - loss: 0.0036 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 0.0010 - 133ms/epoch - 8ms/step
[... epochs 5-52 elided: val_loss never improved from 0.00588; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 8 and to 1e-05 at epoch 13, and val_loss drifted from 0.0064 up to 0.0091 ...]
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.00588
17/17 - 0s - loss: 9.3110e-04 - accuracy: 0.0000e+00 - val_loss: 0.0091 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 126ms/epoch - 7ms/step
Epoch 00053: early stopping
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 63.041023819643854
RMSE: 7.939837770360541
MAPE: 6.449589599500938
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 63.66877348603133
RMSE: 7.979271488427457
MAPE: 6.567170782771208
WMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 74.84193590201411
RMSE: 8.65112338959595
MAPE: 6.92726320779593
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
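TA-Lib's `DEMA` reduces the lag of a plain EMA by combining it with an EMA of itself: DEMA = 2·EMA(price) − EMA(EMA(price)). A numpy sketch in the same spirit (not the library implementation, which also discards the unstable warm-up period):

```python
import numpy as np

def ema(x, timeperiod):
    """Recursive EMA with smoothing alpha = 2 / (timeperiod + 1)."""
    alpha = 2.0 / (timeperiod + 1)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def dema(price, timeperiod=30):
    e = ema(np.asarray(price, float), timeperiod)
    return 2 * e - ema(e, timeperiod)
```

On a constant series the two EMA terms cancel exactly, so DEMA returns the constant; on trending data the double-smoothing correction keeps DEMA closer to price than a single EMA of the same period.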
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.36 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4436.126, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3965.317, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.28 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3736.589, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3598.951, Time=0.06 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.10 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.73 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3600.951, Time=0.24 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.897 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1795.475
Date: Sun, 12 Dec 2021 AIC 3598.951
Time: 13:26:17 BIC 3617.714
Sample: 0 HQIC 3606.157
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1983 0.003 -389.581 0.000 -1.204 -1.192
ar.L2 -0.8973 0.006 -139.732 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.649 0.000 -0.410 -0.387
sigma2 5.0573 0.023 215.292 0.000 5.011 5.103
===================================================================================
Ljung-Box (L1) (Q): 14.41 Jarque-Bera (JB): 2460553.80
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.89
Prob(H) (two-sided): 0.00 Kurtosis: 273.74
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.06980, saving model to LSTM2.h5
10/10 - 5s - loss: 0.2671 - accuracy: 0.0000e+00 - val_loss: 0.0698 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 512ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.06980 to 0.01999, saving model to LSTM2.h5
10/10 - 0s - loss: 0.0727 - accuracy: 0.0000e+00 - val_loss: 0.0200 - val_accuracy: 0.0037 - lr: 0.0010 - 126ms/epoch - 13ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.01999
10/10 - 0s - loss: 0.0179 - accuracy: 0.0000e+00 - val_loss: 0.0490 - val_accuracy: 0.0037 - lr: 0.0010 - 98ms/epoch - 10ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.01999
10/10 - 0s - loss: 0.0061 - accuracy: 0.0000e+00 - val_loss: 0.0307 - val_accuracy: 0.0037 - lr: 0.0010 - 97ms/epoch - 10ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.01999
10/10 - 0s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0213 - val_accuracy: 0.0037 - lr: 0.0010 - 94ms/epoch - 9ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.01999 to 0.01766, saving model to LSTM2.h5
10/10 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0177 - val_accuracy: 0.0037 - lr: 0.0010 - 118ms/epoch - 12ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.01766
10/10 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0220 - val_accuracy: 0.0037 - lr: 0.0010 - 90ms/epoch - 9ms/step
[... epochs 8-55 elided: val_loss never improved from 0.01766; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 11 and to 1e-05 at epoch 16, and val_loss drifted from 0.0220 up to 0.0266 ...]
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.01766
10/10 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0266 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 00056: early stopping
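The plateau and stop above come from standard patience-based callbacks: the best val_loss is checkpointed, the learning rate is cut when val_loss stalls, and training halts after a longer stall. A minimal sketch of that bookkeeping in plain Python (the patience values here are illustrative, not taken from the notebook):

```python
def run_with_patience(val_losses, lr=1e-3, lr_patience=5, stop_patience=20, factor=0.1):
    """Mimic ModelCheckpoint + ReduceLROnPlateau + EarlyStopping bookkeeping."""
    best = float("inf")
    since_best = 0  # epochs since the last "val_loss improved"
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best = vl          # checkpoint: "val_loss improved ... saving model"
            since_best = 0
        else:
            since_best += 1    # "val_loss did not improve"
        if since_best and since_best % lr_patience == 0:
            lr *= factor       # ReduceLROnPlateau
        if since_best >= stop_patience:
            return epoch, best, lr   # EarlyStopping
    return len(val_losses), best, lr
```

With a run that improves twice and then plateaus, this stops `stop_patience` epochs after the last improvement, exactly the pattern in the log.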
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 63.041023819643854
RMSE: 7.939837770360541
MAPE: 6.449589599500938
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 63.66877348603133
RMSE: 7.979271488427457
MAPE: 6.567170782771208
WMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 74.84193590201411
RMSE: 8.65112338959595
MAPE: 6.92726320779593
DEMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 124.07774757087437
RMSE: 11.139019147612341
MAPE: 9.962964959911572
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
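KAMA scales its smoothing constant by an efficiency ratio, so it tracks strong trends quickly and flattens out in choppy noise. A pure-Python sketch of Kaufman's textbook recursion (TA-Lib's exact warm-up and seeding may differ slightly):

```python
def kama(prices, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average: EMA whose alpha adapts to trend efficiency."""
    fast_sc, slow_sc = 2.0 / (fast + 1), 2.0 / (slow + 1)
    out = [None] * timeperiod          # warm-up region, like TA-Lib's NaN lookback
    prev = prices[timeperiod - 1]      # seed with the last warm-up price
    for t in range(timeperiod, len(prices)):
        change = abs(prices[t] - prices[t - timeperiod])
        volatility = sum(abs(prices[i] - prices[i - 1])
                         for i in range(t - timeperiod + 1, t + 1))
        er = change / volatility if volatility else 1.0   # efficiency ratio in [0, 1]
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2    # adaptive smoothing constant
        prev = prev + sc * (prices[t] - prev)
        out.append(prev)
    return out
```

On a flat series the output stays flat; on a perfectly trending series the efficiency ratio is 1 and KAMA follows the price with a small constant lag.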
Working on KAMA predictions
parameters used: 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.32 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4190.464, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3724.371, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.21 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3494.154, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3357.435, Time=0.07 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.35 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.57 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3359.435, Time=0.26 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.913 seconds
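pmdarima's stepwise search simply minimizes AIC over candidate (p, d, q) orders, treating fits that fail to converge as AIC = inf. The selection from the table above reduces to a minimum over the reported values:

```python
import math

# AIC values reported by the stepwise search above; inf marks failed fits
aic = {
    (1, 3, 1): math.inf,
    (0, 3, 0): 4190.464,
    (1, 3, 0): 3724.371,
    (0, 3, 1): math.inf,
    (2, 3, 0): 3494.154,
    (3, 3, 0): 3357.435,
    (3, 3, 1): math.inf,
    (2, 3, 1): math.inf,
}
best_order = min(aic, key=aic.get)
print(best_order, aic[best_order])  # (3, 3, 0) 3357.435
```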
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        13:27:32   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
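The diagnostics above flag extreme non-normality in the residuals (kurtosis near 267 versus 3 for a Gaussian), which is the non-mesokurtic behavior noted earlier. Jarque-Bera combines skewness and excess kurtosis; computing it from the reported residual moments reproduces the printed statistic to within rounding (a sketch; statsmodels works from the raw residuals, not the rounded moments):

```python
def jarque_bera(n, skew, kurtosis):
    """JB statistic from sample size, skewness, and (Pearson) kurtosis."""
    return n / 6.0 * (skew ** 2 + (kurtosis - 3.0) ** 2 / 4.0)

jb = jarque_bera(n=808, skew=3.76, kurtosis=266.93)
# ~2.35e6, matching the reported 2338363.32 up to rounding of the moments
```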
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.06338, saving model to LSTM2.h5
45/45 - 5s - loss: 0.1377 - accuracy: 0.0000e+00 - val_loss: 0.0634 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 5s/epoch - 110ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.06338 to 0.01627, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0407 - accuracy: 0.0000e+00 - val_loss: 0.0163 - val_accuracy: 0.0037 - lr: 0.0010 - 326ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.01627
45/45 - 0s - loss: 0.0241 - accuracy: 0.0000e+00 - val_loss: 0.1140 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 305ms/epoch - 7ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.01627
45/45 - 0s - loss: 0.0322 - accuracy: 0.0000e+00 - val_loss: 0.0343 - val_accuracy: 0.0037 - lr: 0.0010 - 316ms/epoch - 7ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.01627
45/45 - 0s - loss: 0.0123 - accuracy: 0.0000e+00 - val_loss: 0.0926 - val_accuracy: 0.0037 - lr: 0.0010 - 312ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.01627 to 0.00730, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0156 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 0.0010 - 329ms/epoch - 7ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.00730
45/45 - 0s - loss: 0.0048 - accuracy: 0.0000e+00 - val_loss: 0.0354 - val_accuracy: 0.0037 - lr: 0.0010 - 304ms/epoch - 7ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.00730 to 0.00352, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0058 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 0.0010 - 324ms/epoch - 7ms/step
Epochs 9-58: val_loss did not improve from 0.00352 (ReduceLROnPlateau: lr -> 1.0000e-04 at epoch 13, 1.0000e-05 at epoch 18; loss 0.0015 -> 8.5444e-04; val_loss settled near 0.0040)
Epoch 00058: early stopping
KAMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 64.92528911521055
RMSE: 8.057623043752454
MAPE: 6.682416615913553
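Each summary block reports squared-error metrics plus two directional hit rates. The error metrics have standard definitions; the accuracy lines are not defined in the output, so the directional measure below is one plausible reading of "Prediction vs Close" (predicted move matching the actual move) and is an assumption, not the notebook's verified formula:

```python
import math

def regression_report(actual, pred):
    """MSE / RMSE / MAPE plus an assumed directional hit rate."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, pred)) / n
    mape = 100.0 / n * sum(abs((a - p) / a) for a, p in zip(actual, pred))
    # Assumed "Prediction vs Close": does the predicted move from the previous
    # close point the same way as the actual move?
    hits = sum((pred[i] > actual[i - 1]) == (actual[i] > actual[i - 1])
               for i in range(1, n))
    return {"MSE": mse, "RMSE": math.sqrt(mse), "MAPE": mape,
            "direction_pct": 100.0 * hits / (n - 1)}
```

Note that the blocks above show the usual trade-off: a model can score ~54% directionally while its RMSE varies widely across smoothing methods.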
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
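MIDPOINT is the simplest overlay here: the mean of the highest and lowest price over the trailing window. A pure-Python equivalent (TA-Lib pads the first timeperiod-1 outputs with NaN; None stands in for NaN in this sketch):

```python
def midpoint(prices, timeperiod=14):
    """(highest + lowest) / 2 over each trailing window of length timeperiod."""
    out = [None] * (timeperiod - 1)  # warm-up, like TA-Lib's NaN lookback
    for t in range(timeperiod - 1, len(prices)):
        window = prices[t - timeperiod + 1 : t + 1]
        out.append((max(window) + min(window)) / 2.0)
    return out
```

Unlike the weighted averages above, MIDPOINT ignores everything between the window extremes, so it steps rather than glides.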
Working on MIDPOINT predictions
parameters used: 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.34 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4212.289, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3747.746, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.17 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3523.401, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3387.759, Time=0.12 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.49 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.64 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3389.758, Time=0.25 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.137 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        13:29:18   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.08475, saving model to LSTM2.h5
58/58 - 5s - loss: 0.1799 - accuracy: 0.0000e+00 - val_loss: 0.0847 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 87ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.08475 to 0.00730, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0231 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 0.0010 - 422ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.00730
58/58 - 0s - loss: 0.0222 - accuracy: 0.0000e+00 - val_loss: 0.0332 - val_accuracy: 0.0037 - lr: 0.0010 - 399ms/epoch - 7ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.00730
58/58 - 0s - loss: 0.0097 - accuracy: 0.0000e+00 - val_loss: 0.0078 - val_accuracy: 0.0037 - lr: 0.0010 - 404ms/epoch - 7ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.00730
58/58 - 0s - loss: 0.0039 - accuracy: 0.0000e+00 - val_loss: 0.0242 - val_accuracy: 0.0037 - lr: 0.0010 - 375ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.00730 to 0.00502, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0084 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 0.0010 - 428ms/epoch - 7ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.00502
58/58 - 0s - loss: 0.0042 - accuracy: 0.0000e+00 - val_loss: 0.0159 - val_accuracy: 0.0037 - lr: 0.0010 - 394ms/epoch - 7ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.00502
58/58 - 0s - loss: 0.0078 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 0.0010 - 389ms/epoch - 7ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.00502
58/58 - 0s - loss: 0.0090 - accuracy: 0.0000e+00 - val_loss: 0.0147 - val_accuracy: 0.0037 - lr: 0.0010 - 397ms/epoch - 7ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.00502
58/58 - 0s - loss: 0.0138 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 0.0010 - 382ms/epoch - 7ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00011: val_loss did not improve from 0.00502
58/58 - 0s - loss: 0.0162 - accuracy: 0.0000e+00 - val_loss: 0.0114 - val_accuracy: 0.0037 - lr: 0.0010 - 400ms/epoch - 7ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.00502
58/58 - 0s - loss: 0.0227 - accuracy: 0.0000e+00 - val_loss: 0.0087 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 394ms/epoch - 7ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.00502
58/58 - 0s - loss: 0.0024 - accuracy: 0.0000e+00 - val_loss: 0.0069 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 383ms/epoch - 7ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.00502
58/58 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 392ms/epoch - 7ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.00502
58/58 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 383ms/epoch - 7ms/step
Epoch 16/500
Epoch 00016: val_loss improved from 0.00502 to 0.00479, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 416ms/epoch - 7ms/step
Epoch 17/500
Epoch 00017: val_loss improved from 0.00479 to 0.00425, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 424ms/epoch - 7ms/step
Epoch 18/500
Epoch 00018: val_loss improved from 0.00425 to 0.00389, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 425ms/epoch - 7ms/step
Epoch 19/500
Epoch 00019: val_loss improved from 0.00389 to 0.00367, saving model to LSTM2.h5
58/58 - 0s - loss: 9.8093e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 416ms/epoch - 7ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.00367 to 0.00355, saving model to LSTM2.h5
58/58 - 0s - loss: 9.5600e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 422ms/epoch - 7ms/step
Epoch 21/500
Epoch 00021: val_loss improved from 0.00355 to 0.00350, saving model to LSTM2.h5
58/58 - 0s - loss: 9.3938e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 418ms/epoch - 7ms/step
Epoch 22/500
Epoch 00022: val_loss improved from 0.00350 to 0.00350, saving model to LSTM2.h5
58/58 - 0s - loss: 9.2669e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 407ms/epoch - 7ms/step
Epochs 23-71: val_loss did not improve from 0.00350 (ReduceLROnPlateau: lr -> 1.0000e-05 at epoch 25; loss 9.1567e-04 -> 7.7593e-04; val_loss drifted from 0.0035 to 0.0043)
Epoch 72/500
Epoch 00072: val_loss did not improve from 0.00350
58/58 - 0s - loss: 7.7422e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 391ms/epoch - 7ms/step
Epoch 00072: early stopping
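The log pattern above (checkpoint saves on improvement, lr stepping 1e-3 → 1e-4 → 1e-5, then early stopping) is consistent with Keras `ModelCheckpoint`, `ReduceLROnPlateau`, and `EarlyStopping` callbacks all monitoring `val_loss`. A dependency-free sketch of the plateau/stopping bookkeeping follows; the patience values, reduction factor, and floor are assumptions, not values read from the notebook:

```python
def run_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                 stop_patience=40, min_lr=1e-5):
    """Mimic ReduceLROnPlateau + EarlyStopping bookkeeping over a val_loss
    trace. Returns (stopping epoch, final lr, best val_loss). Patience and
    factor values are illustrative assumptions."""
    best = float("inf")
    since_best = 0      # epochs since val_loss last improved (early stopping)
    since_reduce = 0    # epochs since improvement or last LR cut (LR schedule)
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            since_best = 0
            since_reduce = 0
        else:
            since_best += 1
            since_reduce += 1
            if since_reduce >= lr_patience and lr > min_lr:
                lr = max(lr * factor, min_lr)   # cut LR, clipped at the floor
                since_reduce = 0
            if since_best >= stop_patience:
                return epoch, lr, best          # early stopping triggered
    return len(val_losses), lr, best
```

With a trace that improves twice and then plateaus, the learning rate steps down twice and training halts once the stopping patience is exhausted, matching the shape of the log above.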
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 63.041023819643854
RMSE: 7.939837770360541
MAPE: 6.449589599500938
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 63.66877348603133
RMSE: 7.979271488427457
MAPE: 6.567170782771208
WMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 74.84193590201411
RMSE: 8.65112338959595
MAPE: 6.92726320779593
DEMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 124.07774757087437
RMSE: 11.139019147612341
MAPE: 9.962964959911572
KAMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 64.92528911521055
RMSE: 8.057623043752454
MAPE: 6.682416615913553
MIDPOINT
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 45.52% Accuracy
MSE: 68.19255604013144
RMSE: 8.25787842246006
MAPE: 6.72839330666561
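The per-indicator summaries above report MSE, RMSE, MAPE, and two directional-accuracy figures. The exact definitions of "Prediction vs Close" and "Prediction vs Prediction" are not shown in the output, so the sketch below implements only the standard error metrics plus a plain directional hit rate (predicted move vs realized move) as one plausible reading:

```python
import numpy as np

def report(pred, close):
    """MSE / RMSE / MAPE plus a directional hit rate, in the spirit of the
    per-MA-type printouts. The directional metric here is an assumption:
    sign of the predicted step compared with the sign of the realized step."""
    pred = np.asarray(pred, dtype=float)
    close = np.asarray(close, dtype=float)
    err = pred - close
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / close)) * 100.0)
    hits = np.sign(np.diff(pred)) == np.sign(np.diff(close))
    acc = float(np.mean(hits) * 100.0)
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "Accuracy": acc}
```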
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
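The help text above is TA-Lib's docstring for `T3`; the analogous `TEMA` docstring appears further down. A pandas sketch of the underlying definitions (TEMA, and Tillson's T3 as three passes of the generalized DEMA), assuming standard EMA smoothing with span = timeperiod; TA-Lib's own lookback/warm-up handling differs, so values near the start of a series will not match TA-Lib exactly:

```python
import pandas as pd

def ema(s, n):
    """Exponential moving average with span n (alpha = 2/(n+1))."""
    return s.ewm(span=n, adjust=False).mean()

def tema(s, n=30):
    """Triple Exponential Moving Average: 3*EMA - 3*EMA(EMA) + EMA(EMA(EMA))."""
    e1 = ema(s, n)
    e2 = ema(e1, n)
    e3 = ema(e2, n)
    return 3 * e1 - 3 * e2 + e3

def t3(s, n=5, vfactor=0.7):
    """Tillson T3: GD applied three times, where
    GD(x) = (1 + v) * EMA(x) - v * EMA(EMA(x))."""
    def gd(x):
        return (1 + vfactor) * ema(x, n) - vfactor * ema(ema(x, n), n)
    return gd(gd(gd(s)))
```

Both filters reproduce a constant series exactly, which is a quick sanity check on the coefficient sums.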
19
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.34 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4414.515, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3944.062, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.27 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3715.173, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3577.471, Time=0.07 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.03 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.44 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3579.471, Time=0.13 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.401 seconds
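The stepwise trace above is pmdarima's `auto_arima` search minimizing AIC over candidate orders. As a minimal illustration of the criterion itself (not of pmdarima's actual search), the sketch below fits AR(p) models by ordinary least squares on a triple-differenced toy series and picks the AIC-minimizing p; this is a deliberate simplification of the SARIMAX likelihood fit:

```python
import numpy as np

def ar_aic(x, p):
    """Gaussian AIC of an AR(p) fit by ordinary least squares."""
    # Lag matrix: row t holds [x[t-1], ..., x[t-p]], predicting x[t].
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    n = len(y)
    return n * np.log(rss / n) + 2 * (p + 1)   # +1 for the noise variance

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=500))   # toy integrated series
d3 = np.diff(series, n=3)                  # d = 3, as in the search above
best_p = min(range(1, 4), key=lambda p: ar_aic(d3, p))
```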
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        13:31:04   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                            0.00   Prob(JB):                         0.00
Heteroskedasticity (H):             0.00   Skew:                             3.90
Prob(H) (two-sided):                0.00   Kurtosis:                       274.15
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
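The selected order then drives the ARIMA stage of the hybrid, whose residuals are what an ARIMA-LSTM design typically hands to the LSTM. How this notebook wires the two stages together is not shown here, so the following is a NumPy-only approximation of one common design (a least-squares AR fit on the differenced series rather than statsmodels' exact MLE):

```python
import numpy as np

def fit_ar_residuals(x, p=3, d=3):
    """Approximate the ARIMA(p, d, 0) stage of a hybrid model: difference d
    times, fit AR(p) by least squares, and return (coefficients, in-sample
    residuals). In a hybrid ARIMA-LSTM setup the residuals would become the
    LSTM's training target; that wiring is an assumption, not read from the
    notebook."""
    z = np.diff(x, n=d)
    X = np.column_stack([z[p - i - 1:len(z) - i - 1] for i in range(p)])
    y = z[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid
```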
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.00805, saving model to LSTM2.h5
43/43 - 5s - loss: 0.1271 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 112ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.00805
43/43 - 0s - loss: 0.0446 - accuracy: 0.0000e+00 - val_loss: 0.0650 - val_accuracy: 0.0037 - lr: 0.0010 - 309ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.00805
43/43 - 0s - loss: 0.0053 - accuracy: 0.0000e+00 - val_loss: 0.0155 - val_accuracy: 0.0037 - lr: 0.0010 - 305ms/epoch - 7ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.00805 to 0.00534, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0145 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 331ms/epoch - 8ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.00534
43/43 - 0s - loss: 0.0042 - accuracy: 0.0000e+00 - val_loss: 0.0373 - val_accuracy: 0.0037 - lr: 0.0010 - 301ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.00534
43/43 - 0s - loss: 0.0152 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 0.0010 - 304ms/epoch - 7ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.00534
43/43 - 0s - loss: 0.0082 - accuracy: 0.0000e+00 - val_loss: 0.0773 - val_accuracy: 0.0037 - lr: 0.0010 - 304ms/epoch - 7ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.00534
43/43 - 0s - loss: 0.0210 - accuracy: 0.0000e+00 - val_loss: 0.0087 - val_accuracy: 0.0037 - lr: 0.0010 - 312ms/epoch - 7ms/step
Epoch 9/500
Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00009: val_loss did not improve from 0.00534
43/43 - 0s - loss: 0.0082 - accuracy: 0.0000e+00 - val_loss: 0.0732 - val_accuracy: 0.0037 - lr: 0.0010 - 307ms/epoch - 7ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.00534
43/43 - 0s - loss: 0.0355 - accuracy: 0.0000e+00 - val_loss: 0.0159 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 302ms/epoch - 7ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.00534
43/43 - 0s - loss: 0.0042 - accuracy: 0.0000e+00 - val_loss: 0.0106 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 302ms/epoch - 7ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.00534
43/43 - 0s - loss: 0.0034 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 289ms/epoch - 7ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.00534
43/43 - 0s - loss: 0.0029 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 312ms/epoch - 7ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.00534 to 0.00493, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0026 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 341ms/epoch - 8ms/step
Epoch 15/500
Epoch 00015: val_loss improved from 0.00493 to 0.00456, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0023 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 315ms/epoch - 7ms/step
Epoch 16/500
Epoch 00016: val_loss improved from 0.00456 to 0.00445, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 334ms/epoch - 8ms/step
Epoch 17/500
Epoch 00017: val_loss improved from 0.00445 to 0.00444, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0020 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 314ms/epoch - 7ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.00444
43/43 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 304ms/epoch - 7ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.00444
43/43 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 309ms/epoch - 7ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.00444 to 0.00441, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 329ms/epoch - 8ms/step
Epoch 21/500
Epoch 00021: val_loss improved from 0.00441 to 0.00433, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 340ms/epoch - 8ms/step
Epoch 22/500
Epoch 00022: val_loss improved from 0.00433 to 0.00424, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 323ms/epoch - 8ms/step
Epoch 23/500
Epoch 00023: val_loss improved from 0.00424 to 0.00413, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 316ms/epoch - 7ms/step
Epoch 24/500
Epoch 00024: val_loss improved from 0.00413 to 0.00401, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 319ms/epoch - 7ms/step
Epoch 25/500
Epoch 00025: val_loss improved from 0.00401 to 0.00390, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 330ms/epoch - 8ms/step
Epoch 26/500
Epoch 00026: val_loss improved from 0.00390 to 0.00379, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 320ms/epoch - 7ms/step
Epoch 27/500
Epoch 00027: val_loss improved from 0.00379 to 0.00369, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 311ms/epoch - 7ms/step
Epoch 28/500
Epoch 00028: val_loss improved from 0.00369 to 0.00361, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 321ms/epoch - 7ms/step
Epoch 29/500
Epoch 00029: val_loss improved from 0.00361 to 0.00355, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 335ms/epoch - 8ms/step
Epoch 30/500
Epoch 00030: val_loss improved from 0.00355 to 0.00351, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 310ms/epoch - 7ms/step
Epoch 31/500
Epoch 00031: val_loss improved from 0.00351 to 0.00349, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 331ms/epoch - 8ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.00349
43/43 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 293ms/epoch - 7ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.00349
43/43 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 290ms/epoch - 7ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.00349
43/43 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 290ms/epoch - 7ms/step
Epoch 35/500
Epoch 00035: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00035: val_loss did not improve from 0.00349
43/43 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 289ms/epoch - 7ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.00349
43/43 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 285ms/epoch - 7ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.6026e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 280ms/epoch - 7ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.3658e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 299ms/epoch - 7ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.2915e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 287ms/epoch - 7ms/step
Epoch 40/500
Epoch 00040: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00040: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.2580e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 283ms/epoch - 7ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.2358e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 302ms/epoch - 7ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.2171e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.1996e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 282ms/epoch - 7ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.1824e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 287ms/epoch - 7ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.1651e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 314ms/epoch - 7ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.1475e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 287ms/epoch - 7ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.1297e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 302ms/epoch - 7ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.1115e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 311ms/epoch - 7ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.0931e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 307ms/epoch - 7ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.0743e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.0552e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 285ms/epoch - 7ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.0358e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 298ms/epoch - 7ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.00349
43/43 - 0s - loss: 9.0161e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 292ms/epoch - 7ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.9960e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 291ms/epoch - 7ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.9756e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 298ms/epoch - 7ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.9550e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 290ms/epoch - 7ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.9340e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 294ms/epoch - 7ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.9127e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 303ms/epoch - 7ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.8911e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 280ms/epoch - 7ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.8693e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 300ms/epoch - 7ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.8471e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 308ms/epoch - 7ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.8247e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 294ms/epoch - 7ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.8021e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 286ms/epoch - 7ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.7792e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 287ms/epoch - 7ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.7560e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 301ms/epoch - 7ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.7326e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 283ms/epoch - 7ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.7091e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 287ms/epoch - 7ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.6853e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 305ms/epoch - 7ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.6613e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 302ms/epoch - 7ms/step
Epoch 70/500
Epoch 00070: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.6371e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 284ms/epoch - 7ms/step
Epoch 71/500
Epoch 00071: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.6128e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 72/500
Epoch 00072: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.5883e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 302ms/epoch - 7ms/step
Epoch 73/500
Epoch 00073: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.5637e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 305ms/epoch - 7ms/step
Epoch 74/500
Epoch 00074: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.5389e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 293ms/epoch - 7ms/step
Epoch 75/500
Epoch 00075: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.5141e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 305ms/epoch - 7ms/step
Epoch 76/500
Epoch 00076: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.4891e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 292ms/epoch - 7ms/step
Epoch 77/500
Epoch 00077: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.4640e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 289ms/epoch - 7ms/step
Epoch 78/500
Epoch 00078: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.4388e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 293ms/epoch - 7ms/step
Epoch 79/500
Epoch 00079: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.4136e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 291ms/epoch - 7ms/step
Epoch 80/500
Epoch 00080: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.3883e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 81/500
Epoch 00081: val_loss did not improve from 0.00349
43/43 - 0s - loss: 8.3630e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 297ms/epoch - 7ms/step
Epoch 00081: early stopping
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 63.041023819643854
RMSE: 7.939837770360541
MAPE: 6.449589599500938
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 63.66877348603133
RMSE: 7.979271488427457
MAPE: 6.567170782771208
WMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 74.84193590201411
RMSE: 8.65112338959595
MAPE: 6.92726320779593
DEMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 124.07774757087437
RMSE: 11.139019147612341
MAPE: 9.962964959911572
KAMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 64.92528911521055
RMSE: 8.057623043752454
MAPE: 6.682416615913553
MIDPOINT
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 45.52% Accuracy
MSE: 68.19255604013144
RMSE: 8.25787842246006
MAPE: 6.72839330666561
T3
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 149.0300312328299
RMSE: 12.207785680983669
MAPE: 10.094975187792123
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4352.703, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3889.412, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.19 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3689.930, Time=0.04 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3574.245, Time=0.07 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.35 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.62 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3576.245, Time=0.14 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.858 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        13:32:42   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                            0.00   Prob(JB):                         0.00
Heteroskedasticity (H):             0.16   Skew:                             2.52
Prob(H) (two-sided):                0.00   Kurtosis:                       196.90
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.07399, saving model to LSTM2.h5
90/90 - 6s - loss: 0.1376 - accuracy: 0.0000e+00 - val_loss: 0.0740 - val_accuracy: 0.0037 - lr: 0.0010 - 6s/epoch - 64ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.07399 to 0.01692, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0319 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 0.0010 - 599ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.01692
90/90 - 1s - loss: 0.0548 - accuracy: 0.0000e+00 - val_loss: 0.0262 - val_accuracy: 0.0037 - lr: 0.0010 - 567ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.01692 to 0.01084, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0325 - accuracy: 0.0000e+00 - val_loss: 0.0108 - val_accuracy: 0.0037 - lr: 0.0010 - 608ms/epoch - 7ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.01084 to 0.01009, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0231 - accuracy: 0.0000e+00 - val_loss: 0.0101 - val_accuracy: 0.0037 - lr: 0.0010 - 586ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.01009 to 0.00743, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0143 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 0.0010 - 613ms/epoch - 7ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.00743
90/90 - 1s - loss: 0.0090 - accuracy: 0.0000e+00 - val_loss: 0.0118 - val_accuracy: 0.0037 - lr: 0.0010 - 572ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.00743
90/90 - 1s - loss: 0.0083 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 0.0010 - 568ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.00743
90/90 - 1s - loss: 0.0077 - accuracy: 0.0000e+00 - val_loss: 0.0198 - val_accuracy: 0.0037 - lr: 0.0010 - 568ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.00743
90/90 - 1s - loss: 0.0090 - accuracy: 0.0000e+00 - val_loss: 0.0111 - val_accuracy: 0.0037 - lr: 0.0010 - 568ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00011: val_loss did not improve from 0.00743
90/90 - 1s - loss: 0.0095 - accuracy: 0.0000e+00 - val_loss: 0.0270 - val_accuracy: 0.0037 - lr: 0.0010 - 567ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.00743 to 0.00655, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0199 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 608ms/epoch - 7ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0037 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 580ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0025 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 592ms/epoch - 7ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 597ms/epoch - 7ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0097 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 583ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00017: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0111 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 575ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0114 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 580ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0116 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 564ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0118 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 589ms/epoch - 7ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0120 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 582ms/epoch - 6ms/step
Epoch 22/500
Epoch 00022: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00022: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0122 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 570ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.00655
90/90 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0124 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 592ms/epoch - 7ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.00655
90/90 - 1s - loss: 9.9328e-04 - accuracy: 0.0000e+00 - val_loss: 0.0125 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 586ms/epoch - 7ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.00655
90/90 - 1s - loss: 9.8587e-04 - accuracy: 0.0000e+00 - val_loss: 0.0127 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 602ms/epoch - 7ms/step
[epochs 26-61 elided: val_loss never improved on 0.00655; lr held at 1.0000e-05 while train loss fell from 9.7856e-04 to 8.0702e-04 and val_loss rose from 0.0129 to 0.0203]
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.00655
90/90 - 1s - loss: 8.0415e-04 - accuracy: 0.0000e+00 - val_loss: 0.0205 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 561ms/epoch - 6ms/step
Epoch 00062: early stopping
SMA
Prediction vs Close:      53.73% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE:  63.041023819643854
RMSE: 7.939837770360541
MAPE: 6.449589599500938

EMA
Prediction vs Close:      54.1% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE:  63.66877348603133
RMSE: 7.979271488427457
MAPE: 6.567170782771208

WMA
Prediction vs Close:      54.48% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE:  74.84193590201411
RMSE: 8.65112338959595
MAPE: 6.92726320779593

DEMA
Prediction vs Close:      52.24% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE:  124.07774757087437
RMSE: 11.139019147612341
MAPE: 9.962964959911572

KAMA
Prediction vs Close:      52.99% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE:  64.92528911521055
RMSE: 8.057623043752454
MAPE: 6.682416615913553

MIDPOINT
Prediction vs Close:      51.87% Accuracy
Prediction vs Prediction: 45.52% Accuracy
MSE:  68.19255604013144
RMSE: 8.25787842246006
MAPE: 6.72839330666561

T3
Prediction vs Close:      53.73% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE:  149.0300312328299
RMSE: 12.207785680983669
MAPE: 10.094975187792123

TEMA
Prediction vs Close:      50.37% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE:  71.80641753112648
RMSE: 8.473866740227066
MAPE: 7.512371017185029

Runtime: mins: 12.728734250049998
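The two accuracy figures per MA type above are directional hit-rates: a prediction scores a hit when it lands on the right side of either the previous close or the previous prediction, matching the direction the close actually moved. A minimal numpy sketch of both variants, mirroring the scoring loop used later in the driver code (the function name here is illustrative, not from the notebook):

```python
import numpy as np

def directional_accuracy(prediction, actual):
    """Two directional hit-rates, mirroring the notebook's driver loop:
    variant 1 compares each prediction to the previous close,
    variant 2 compares each prediction to the previous prediction."""
    hits_close, hits_pred = [], []
    for i in range(1, len(prediction)):
        up = actual[i] > actual[i - 1]
        down = actual[i] < actual[i - 1]
        # Variant 1: prediction on the right side of yesterday's close
        hits_close.append(int((prediction[i] > actual[i - 1] and up) or
                              (prediction[i] < actual[i - 1] and down)))
        # Variant 2: prediction moved in the same direction as the close
        hits_pred.append(int((prediction[i] > prediction[i - 1] and up) or
                             (prediction[i] < prediction[i - 1] and down)))
    return float(np.mean(hits_close)), float(np.mean(hits_pred))
```

With around 50% on both measures, the hybrid models above are close to coin-flip territory on direction, even where their MSE/RMSE look reasonable.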
from google.colab import files
import cv2
import matplotlib.pyplot as plt
uploaded = files.upload()
Saving Experiment2.png to Experiment2 (1).png
img = cv2.imread('Experiment2.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture Experiment2', fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fb3f66e72d0>
with open('simulation2_data.json') as json_file:
    simulation2 = json.load(json_file)
fileimg = 'Experiment2'
for SIM in simulation2.keys():
    plot_train(simulation2, SIM)
    plot_test(simulation2, SIM)
----- SMA -----
Train: RMSE 8.825487281603085   MSE 77.88922575773782   MAE 7.678486704575587
Test:  RMSE 7.939837770360541   MSE 63.041023819643854  MAE 6.449589599500938
----- EMA -----
Train: RMSE 10.17885992167541   MSE 103.60918930508994  MAE 9.004301877044565
Test:  RMSE 7.979271488427457   MSE 63.66877348603133   MAE 6.567170782771208
----- WMA -----
Train: RMSE 10.465802137903177  MSE 109.5330143897387   MAE 9.31366489027537
Test:  RMSE 8.65112338959595    MSE 74.84193590201411   MAE 6.92726320779593
----- DEMA -----
Train: RMSE 12.116566242192249  MSE 146.8111775014328   MAE 10.867763409922118
Test:  RMSE 11.139019147612341  MSE 124.07774757087437  MAE 9.962964959911572
----- KAMA -----
Train: RMSE 10.526385532586437  MSE 110.80479238064504  MAE 9.464160976428907
Test:  RMSE 8.057623043752454   MSE 64.92528911521055   MAE 6.682416615913553
----- MIDPOINT -----
Train: RMSE 9.44780598788214    MSE 89.26103798466161   MAE 8.392066648593033
Test:  RMSE 8.25787842246006    MSE 68.19255604013144   MAE 6.72839330666561
----- T3 -----
Train: RMSE 12.031116618105388  MSE 144.74776707845163  MAE 10.821733970313776
Test:  RMSE 12.207785680983669  MSE 149.0300312328299   MAE 10.094975187792123
----- TEMA -----
Train: RMSE 7.432502353073835   MSE 55.242091228448096  MAE 5.1691971761128395
Test:  RMSE 8.473866740227066   MSE 71.80641753112648   MAE 7.512371017185029
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape: X has shape (samples, n_steps_in, features),
    # e.g. 224 x 3 x 21 (each 3 x 21 array is 3 days' worth of data);
    # yc holds the corresponding closing-price values
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    # yc_train, yc_test = split_train_test(original_data)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # constant offset subtracted from the inverse-transformed test predictions below
    input_dim = X_train.shape[1]     # e.g. 3
    feature_size = X_train.shape[2]  # e.g. 24
    output_dim = y_train.shape[1]    # e.g. 1

    # # Option 1: single LSTM layer + dense head
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64, activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
    # # ...followed by the same callbacks / plot_model / fit / loss-plot code
    # # as Option 3 below, with batch_size=1 and checkpoint file 'LSTM1.h5'

    # # Option 2: bidirectional LSTM
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # ...followed by the same callbacks / plot_model / fit / loss-plot code
    # # as Option 3 below, with batch_size=1 and checkpoint file 'LSTM1.h5'

    # Option 3: small LSTM with a custom double-tanh output activation
    class Double_Tanh(Activation):
        def __init__(self, activation, **kwargs):
            super(Double_Tanh, self).__init__(activation, **kwargs)
            self.__name__ = 'double_tanh'

    def double_tanh(x):
        return K.tanh(x) * 2

    get_custom_objects().update({'double_tanh': Double_Tanh(double_tanh)})

    # Model generation; on weight regularization for LSTMs, see
    # https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    model = Sequential()
    model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2,
                   kernel_regularizer=l1_l2(0.00, 0.00), bias_regularizer=l1_l2(0.00, 0.00)))
    model.add(Dense(1))
    model.add(Activation(double_tanh))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])

    # Common code
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM3.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file + '.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int(optimized_period[ma]),
                        verbose=2, callbacks=callbacks, validation_data=(X_test, y_test), shuffle=False)
    # Plot loss
    fname2 = img_file + '-' + ma
    plt.title(img_file + '-' + ma + ' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2 + '.png', dpi='figure')
    pyplot.show()

    # # Option 4: stacked LSTM
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(X_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len / 2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # ...followed by the same callbacks / plot_model / fit / loss-plot code
    # # as Option 3 above, with batch_size=1 and checkpoint file 'LSTM1.h5'

    # Generate predictions (train)
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data
    ## replace with yc, X_test generated by new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()

    # Generate predictions (test)
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data
    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return (Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr,
            Original_te, predictionte, mse_te, rmse_te, mae_te)
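get_lstm scales features and targets into (-1, 1) before training and inverse-transforms predictions back into price units afterwards. A minimal numpy sketch of that round trip, mirroring what sklearn's MinMaxScaler with feature_range=(-1, 1) does internally (function names here are illustrative, not from the notebook):

```python
import numpy as np

def minmax_scale(x, lo=-1.0, hi=1.0):
    """Linearly map x into [lo, hi]; also return the parameters needed to
    invert the mapping (the state a fitted MinMaxScaler keeps)."""
    xmin, xmax = x.min(), x.max()
    scaled = (x - xmin) / (xmax - xmin) * (hi - lo) + lo
    return scaled, (xmin, xmax, lo, hi)

def minmax_inverse(scaled, params):
    """Undo minmax_scale, recovering values in the original units."""
    xmin, xmax, lo, hi = params
    return (scaled - lo) / (hi - lo) * (xmax - xmin) + xmin

prices = np.array([90.0, 100.0, 110.0, 120.0])
scaled, params = minmax_scale(prices)
restored = minmax_inverse(scaled, params)
```

The (-1, 1) range matters here because the network's final double-tanh activation outputs values in (-2, 2), so scaled targets sit comfortably inside the activation's range.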
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation3 = {}
    imgfile = 'Experiment3'
    for ma in optimized_period:
        print(ma)
        print(functions[ma])
        print(int(optimized_period[ma]))
        # Split each column into a low-volatility MA component and the
        # high-volatility residual left after subtracting it
        low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
        low_vol = low_vol.fillna(0)
        low_vol_data = df['close']
        high_vol = pd.DataFrame()
        df2 = df.copy()
        for i in df2.columns:
            if i in low_vol.columns:
                high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
        high_vol_data = df['close']
        ## *****************************************************
        # Generate ARIMA and LSTM predictions
        print('\nWorking on ' + ma + ' predictions')
        try:
            print('parameters used : ', train_len, test_len)
            low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = \
                get_arima(low_vol, low_vol_data, train_len, test_len)
        except Exception:
            print('ARIMA error, skipping to next MA type')
            continue
        (Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr,
         high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae) = \
            get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
        # Train-side "final" prediction: LSTM residual prediction added on top of the close
        final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)  # ignoring first 3 steps
        mse_ftr = mean_squared_error(df['close'].head(train_len).values, final_prediction_tr.values)
        rmse_ftr = mse_ftr ** 0.5
        mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        # Test-side final prediction: ARIMA (low-vol) + LSTM (high-vol)
        final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
        mse = mean_squared_error(df['close'].tail(test_len).values, final_prediction.values)
        rmse = mse ** 0.5
        mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        # Generate prediction accuracy
        actual = df['close'].tail(test_len).values
        result_1 = []
        result_2 = []
        for i in range(1, len(final_prediction)):
            # Compare prediction to previous close price
            if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                result_1.append(1)
            elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                result_1.append(1)
            else:
                result_1.append(0)
            # Compare prediction to previous prediction
            if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                result_2.append(1)
            elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                result_2.append(1)
            else:
                result_2.append(0)
        accuracy_1 = np.mean(result_1)
        accuracy_2 = np.mean(result_2)
        simulation3[ma] = {'low_vol': {'original': list(low_vol_Original), 'prediction': list(low_vol_prediction),
                                       'mse': low_vol_mse, 'rmse': low_vol_rmse, 'mae': low_vol_mae},
                           'high_vol': {'original': list(high_vol_Original), 'prediction': list(high_vol_prediction),
                                        'mse': high_vol_mse, 'rmse': high_vol_rmse, 'mae': high_vol_mae},
                           'final_tr': {'original': df['close'].head(train_len).tolist(),
                                        'prediction': final_prediction_tr.values.tolist(),
                                        'mse': mse_ftr, 'rmse': rmse_ftr, 'mae': mae_ftr},
                           'final': {'original': df['close'].tail(test_len).tolist(),
                                     'prediction': final_prediction.values.tolist(),
                                     'mse': mse, 'rmse': rmse, 'mae': mae},
                           'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
        # Save simulation data here as a checkpoint
        with open('simulation3_data.json', 'w') as fp:
            json.dump(simulation3, fp)
    for ma in simulation3.keys():
        print('\n' + ma)
        print('Prediction vs Close:\t\t' + str(round(100*simulation3[ma]['accuracy']['prediction vs close'], 2))
              + '% Accuracy')
        print('Prediction vs Prediction:\t' + str(round(100*simulation3[ma]['accuracy']['prediction vs prediction'], 2))
              + '% Accuracy')
        print('MSE:\t', simulation3[ma]['final']['mse'],
              '\nRMSE:\t', simulation3[ma]['final']['rmse'],
              '\nMAE:\t', simulation3[ma]['final']['mae'])
        # ('mape' is computed above but not stored in simulation3[ma]['final'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:', elapsed/60)
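The driver loop above rests on the fact that the MA decomposition is exact: the low-volatility component plus the high-volatility residual reconstructs the original series, so summing an ARIMA forecast of one and an LSTM forecast of the other targets the close directly. A sketch of the split with a plain pandas rolling mean standing in for the TA-Lib functions[ma] call:

```python
import pandas as pd

close = pd.Series([100.0, 102.0, 101.0, 105.0, 107.0, 106.0, 110.0])
period = 3
# Low-volatility component: a 3-period rolling mean, NaN head filled
# with 0, matching the notebook's low_vol = ...fillna(0) step
low_vol = close.rolling(period).mean().fillna(0)
# High-volatility residual: whatever the moving average leaves behind
high_vol = close - low_vol
# The decomposition loses nothing: the two components sum back to the
# original close, which is what licenses adding the two model outputs
reconstructed = low_vol + high_vol
```

Shorter MA periods shift more of the series into the residual, which is one lever for rebalancing volatility between the ARIMA and LSTM halves, as noted at the top of this section.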
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.39 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4157.020, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3687.148, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.13 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3458.651, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3322.133, Time=0.06 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.53 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.53 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3324.133, Time=0.14 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 1.919 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1657.067
Date: Sun, 12 Dec 2021 AIC 3322.133
Time: 13:38:00 BIC 3340.897
Sample: 0 HQIC 3329.339
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1966 0.003 -387.226 0.000 -1.203 -1.191
ar.L2 -0.8952 0.006 -138.692 0.000 -0.908 -0.883
ar.L3 -0.3968 0.006 -68.284 0.000 -0.408 -0.385
sigma2 3.5858 0.017 214.535 0.000 3.553 3.619
===================================================================================
Ljung-Box (L1) (Q): 14.47 Jarque-Bera (JB): 2428881.42
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 271.99
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
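pmdarima's stepwise search above selects the (p,d,q) order by minimizing AIC across candidate fits. The core idea can be sketched for a pure AR model with ordinary least squares and the Gaussian approximation AIC ≈ n·ln(RSS/n) + 2k; this is a simplified stand-in for pmdarima's full search, which also varies the differencing d and MA order q:

```python
import numpy as np

def ar_aic(series, p):
    """Fit AR(p) by least squares and return an approximate AIC."""
    n = len(series) - p
    y = series[p:]
    # Lag matrix: column k-1 holds the series shifted back by k steps
    X = np.column_stack([series[p - k: len(series) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    return n * np.log(rss / n) + 2 * (p + 1)  # +1 for the noise variance

# Simulate an AR(2) process and let AIC pick the order
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
best_p = min(range(1, 6), key=lambda p: ar_aic(x, p))
```

The 2(p+1) penalty is what stops the search from always preferring the largest model: extra lags must buy enough RSS reduction to pay for themselves, which is why the stepwise trace above settles on ARIMA(3,3,0) rather than the largest candidate tried.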
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04037, saving model to LSTM3.h5
48/48 - 3s - loss: 0.1177 - mse: 0.1177 - mae: 0.2555 - val_loss: 0.0404 - val_mse: 0.0404 - val_mae: 0.1495 - lr: 0.0010 - 3s/epoch - 60ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.04037 to 0.02521, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0329 - mse: 0.0329 - mae: 0.1399 - val_loss: 0.0252 - val_mse: 0.0252 - val_mae: 0.1348 - lr: 0.0010 - 267ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.02521 to 0.02239, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0945 - val_loss: 0.0224 - val_mse: 0.0224 - val_mae: 0.1237 - lr: 0.0010 - 278ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.02239 to 0.02012, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0135 - mse: 0.0135 - mae: 0.0916 - val_loss: 0.0201 - val_mse: 0.0201 - val_mae: 0.1179 - lr: 0.0010 - 276ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.02012 to 0.01903, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0801 - val_loss: 0.0190 - val_mse: 0.0190 - val_mae: 0.1147 - lr: 0.0010 - 269ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.01903 to 0.01707, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0832 - val_loss: 0.0171 - val_mse: 0.0171 - val_mae: 0.1061 - lr: 0.0010 - 268ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.01707 to 0.01650, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0797 - val_loss: 0.0165 - val_mse: 0.0165 - val_mae: 0.1035 - lr: 0.0010 - 273ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.01650
48/48 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0778 - val_loss: 0.0171 - val_mse: 0.0171 - val_mae: 0.1053 - lr: 0.0010 - 254ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.01650 to 0.01619, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0779 - val_loss: 0.0162 - val_mse: 0.0162 - val_mae: 0.1009 - lr: 0.0010 - 286ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.01619 to 0.01583, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0816 - val_loss: 0.0158 - val_mse: 0.0158 - val_mae: 0.0985 - lr: 0.0010 - 270ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0806 - val_loss: 0.0161 - val_mse: 0.0161 - val_mae: 0.1000 - lr: 0.0010 - 266ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0137 - mse: 0.0137 - mae: 0.0904 - val_loss: 0.0185 - val_mse: 0.0185 - val_mae: 0.1076 - lr: 0.0010 - 252ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0125 - mse: 0.0125 - mae: 0.0870 - val_loss: 0.0161 - val_mse: 0.0161 - val_mae: 0.0982 - lr: 0.0010 - 254ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0152 - mse: 0.0152 - mae: 0.0966 - val_loss: 0.0162 - val_mse: 0.0162 - val_mae: 0.0984 - lr: 0.0010 - 258ms/epoch - 5ms/step
Epoch 15/500
Epoch 00015: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00015: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0174 - mse: 0.0174 - mae: 0.1047 - val_loss: 0.0170 - val_mse: 0.0170 - val_mae: 0.1009 - lr: 0.0010 - 253ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0336 - mse: 0.0336 - mae: 0.1509 - val_loss: 0.0255 - val_mse: 0.0255 - val_mae: 0.1273 - lr: 1.0000e-04 - 249ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0881 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1368 - lr: 1.0000e-04 - 249ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0760 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1351 - lr: 1.0000e-04 - 260ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0726 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1324 - lr: 1.0000e-04 - 260ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00020: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0727 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1315 - lr: 1.0000e-04 - 253ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0651 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1316 - lr: 1.0000e-05 - 254ms/epoch - 5ms/step
[epochs 22-46 elided: val_loss did not improve from 0.01583; lr held at 1.0000e-05 while val_loss drifted from 0.0269 down to 0.0252]
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0621 - val_loss: 0.0251 - val_mse: 0.0251 - val_mae: 0.1264 - lr: 1.0000e-05 - 267ms/epoch - 6ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0643 - val_loss: 0.0250 - val_mse: 0.0250 - val_mae: 0.1261 - lr: 1.0000e-05 - 258ms/epoch - 5ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0627 - val_loss: 0.0249 - val_mse: 0.0249 - val_mae: 0.1258 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0615 - val_loss: 0.0249 - val_mse: 0.0249 - val_mae: 0.1257 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0633 - val_loss: 0.0247 - val_mse: 0.0247 - val_mae: 0.1253 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0628 - val_loss: 0.0246 - val_mse: 0.0246 - val_mae: 0.1250 - lr: 1.0000e-05 - 258ms/epoch - 5ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0615 - val_loss: 0.0243 - val_mse: 0.0243 - val_mae: 0.1240 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0635 - val_loss: 0.0242 - val_mse: 0.0242 - val_mae: 0.1237 - lr: 1.0000e-05 - 245ms/epoch - 5ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0613 - val_loss: 0.0241 - val_mse: 0.0241 - val_mae: 0.1234 - lr: 1.0000e-05 - 247ms/epoch - 5ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0611 - val_loss: 0.0240 - val_mse: 0.0240 - val_mae: 0.1231 - lr: 1.0000e-05 - 258ms/epoch - 5ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0615 - val_loss: 0.0240 - val_mse: 0.0240 - val_mae: 0.1230 - lr: 1.0000e-05 - 255ms/epoch - 5ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0624 - val_loss: 0.0238 - val_mse: 0.0238 - val_mae: 0.1226 - lr: 1.0000e-05 - 254ms/epoch - 5ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0629 - val_loss: 0.0237 - val_mse: 0.0237 - val_mae: 0.1221 - lr: 1.0000e-05 - 258ms/epoch - 5ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.01583
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0610 - val_loss: 0.0235 - val_mse: 0.0235 - val_mae: 0.1216 - lr: 1.0000e-05 - 259ms/epoch - 5ms/step
Epoch 00060: early stopping
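The learning-rate drops in the log above (lr: 1e-03 → 1e-04 → 1e-05) and the closing "early stopping" line come from Keras-style ReduceLROnPlateau and EarlyStopping callbacks. The plateau logic can be sketched in plain Python; the patience values below are illustrative assumptions, not taken from the original notebook.

```python
# Pure-Python sketch of the ReduceLROnPlateau / EarlyStopping behaviour
# visible in the training log. Patience values are assumptions.

def schedule(val_losses, lr=1e-3, factor=0.1, min_lr=1e-5,
             lr_patience=5, stop_patience=10):
    """Return (final_lr, stop_epoch) for a sequence of validation losses."""
    best = float("inf")
    since_best = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, since_best = loss, 0          # "val_loss improved ..."
        else:
            since_best += 1                     # "val_loss did not improve ..."
        if since_best and since_best % lr_patience == 0:
            lr = max(lr * factor, min_lr)       # ReduceLROnPlateau step (x0.1)
        if since_best >= stop_patience:
            return lr, epoch                    # EarlyStopping fires
    return lr, len(val_losses)

# A plateauing loss curve: improvement stops after epoch 3.
losses = [0.05, 0.03, 0.016] + [0.02] * 20
final_lr, stop_epoch = schedule(losses)
```

With this sample curve the rate bottoms out near 1e-05 before training halts, mirroring the shape of the trace above.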
SMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 123.96893050522607
RMSE: 11.134133576764116
MAPE: 9.602398807260117
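The scorecards interleaved with the logs report MSE, RMSE, MAPE, and two directional-accuracy figures. A minimal sketch of how such metrics can be computed — the exact definitions of "Prediction vs Close" and "Prediction vs Prediction" are not shown in the output, so the directional check below is one plausible reading, and `y_true`/`y_pred` are hypothetical arrays standing in for the close prices and hybrid forecasts:

```python
import numpy as np

def score(y_true, y_pred):
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / y_true)) * 100
    # One reading of directional accuracy: did the forecast move in the
    # same direction as the actual series from one step to the next?
    direction = np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))
    return mse, rmse, mape, 100 * direction.mean()

# Hypothetical values for illustration only.
y_true = np.array([100.0, 101.0, 99.5, 102.0])
y_pred = np.array([100.5, 100.8, 100.0, 101.0])
mse, rmse, mape, acc = score(y_true, y_pred)
```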
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
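The TA-Lib help text above documents `EMA` but not its recursion. A minimal NumPy re-implementation of the standard definition (smoothing factor 2/(n+1), seeded with the SMA of the first `timeperiod` values, which is the common TA-Lib convention):

```python
import numpy as np

def ema(price, timeperiod=30):
    """Exponential moving average; first timeperiod-1 outputs are NaN."""
    price = np.asarray(price, dtype=float)
    out = np.full_like(price, np.nan)
    k = 2.0 / (timeperiod + 1)                 # standard EMA smoothing factor
    out[timeperiod - 1] = price[:timeperiod].mean()   # SMA seed
    for i in range(timeperiod, len(price)):
        out[i] = price[i] * k + out[i - 1] * (1 - k)
    return out

r = ema([1, 2, 3, 4, 5, 6], timeperiod=3)
```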
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4231.556, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3761.238, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.21 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3532.227, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3394.496, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.59 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.45 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3396.496, Time=0.15 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 1.966 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        13:39:39   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
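"Performing stepwise search to minimize aic" means pmdarima fits candidate (p,d,q) orders and keeps the one with the lowest AIC; `AIC=inf` marks fits that failed to converge. Replaying the selection step on the values from the trace above:

```python
import math

# AIC values copied from the stepwise trace above; inf = fit failed.
aic = {
    (1, 3, 1): math.inf,
    (0, 3, 0): 4231.556,
    (1, 3, 0): 3761.238,
    (0, 3, 1): math.inf,
    (2, 3, 0): 3532.227,
    (3, 3, 0): 3394.496,
    (3, 3, 1): math.inf,
    (2, 3, 1): math.inf,
}
best = min(aic, key=aic.get)
print(best)  # (3, 3, 0) — matching "Best model: ARIMA(3,3,0)(0,0,0)[0]"
```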
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.01660, saving model to LSTM3.h5
16/16 - 3s - loss: 0.0665 - mse: 0.0665 - mae: 0.2195 - val_loss: 0.0166 - val_mse: 0.0166 - val_mae: 0.1016 - lr: 0.0010 - 3s/epoch - 184ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.01660 to 0.01641, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0214 - mse: 0.0214 - mae: 0.1140 - val_loss: 0.0164 - val_mse: 0.0164 - val_mae: 0.0972 - lr: 0.0010 - 121ms/epoch - 8ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.01641
16/16 - 0s - loss: 0.0122 - mse: 0.0122 - mae: 0.0864 - val_loss: 0.0188 - val_mse: 0.0188 - val_mae: 0.1073 - lr: 0.0010 - 107ms/epoch - 7ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.01641
16/16 - 0s - loss: 0.0113 - mse: 0.0113 - mae: 0.0848 - val_loss: 0.0198 - val_mse: 0.0198 - val_mae: 0.1103 - lr: 0.0010 - 97ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.01641 to 0.01552, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0792 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.0939 - lr: 0.0010 - 123ms/epoch - 8ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.01552 to 0.01419, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0758 - val_loss: 0.0142 - val_mse: 0.0142 - val_mae: 0.0887 - lr: 0.0010 - 123ms/epoch - 8ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.01419
16/16 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0730 - val_loss: 0.0145 - val_mse: 0.0145 - val_mae: 0.0894 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.01419 to 0.01396, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0680 - val_loss: 0.0140 - val_mse: 0.0140 - val_mae: 0.0879 - lr: 0.0010 - 118ms/epoch - 7ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.01396 to 0.01391, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0691 - val_loss: 0.0139 - val_mse: 0.0139 - val_mae: 0.0877 - lr: 0.0010 - 118ms/epoch - 7ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.01391 to 0.01358, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0689 - val_loss: 0.0136 - val_mse: 0.0136 - val_mae: 0.0869 - lr: 0.0010 - 121ms/epoch - 8ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.01358 to 0.01304, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0636 - val_loss: 0.0130 - val_mse: 0.0130 - val_mae: 0.0859 - lr: 0.0010 - 117ms/epoch - 7ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.01304 to 0.01294, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0644 - val_loss: 0.0129 - val_mse: 0.0129 - val_mae: 0.0852 - lr: 0.0010 - 118ms/epoch - 7ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.01294 to 0.01271, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0643 - val_loss: 0.0127 - val_mse: 0.0127 - val_mae: 0.0849 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0616 - val_loss: 0.0129 - val_mse: 0.0129 - val_mae: 0.0873 - lr: 0.0010 - 101ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0578 - val_loss: 0.0140 - val_mse: 0.0140 - val_mae: 0.0942 - lr: 0.0010 - 110ms/epoch - 7ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0573 - val_loss: 0.0132 - val_mse: 0.0132 - val_mae: 0.0907 - lr: 0.0010 - 116ms/epoch - 7ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0580 - val_loss: 0.0140 - val_mse: 0.0140 - val_mae: 0.0959 - lr: 0.0010 - 104ms/epoch - 7ms/step
Epoch 18/500
Epoch 00018: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00018: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0650 - val_loss: 0.0141 - val_mse: 0.0141 - val_mae: 0.0969 - lr: 0.0010 - 107ms/epoch - 7ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0544 - val_loss: 0.0139 - val_mse: 0.0139 - val_mae: 0.0959 - lr: 1.0000e-04 - 102ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0513 - val_loss: 0.0138 - val_mse: 0.0138 - val_mae: 0.0951 - lr: 1.0000e-04 - 96ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0545 - val_loss: 0.0140 - val_mse: 0.0140 - val_mae: 0.0959 - lr: 1.0000e-04 - 101ms/epoch - 6ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0502 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0974 - lr: 1.0000e-04 - 98ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00023: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0511 - val_loss: 0.0146 - val_mse: 0.0146 - val_mae: 0.0986 - lr: 1.0000e-04 - 101ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0511 - val_loss: 0.0145 - val_mse: 0.0145 - val_mae: 0.0986 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0521 - val_loss: 0.0145 - val_mse: 0.0145 - val_mae: 0.0985 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0508 - val_loss: 0.0145 - val_mse: 0.0145 - val_mae: 0.0985 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0518 - val_loss: 0.0145 - val_mse: 0.0145 - val_mae: 0.0985 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 28/500
Epoch 00028: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00028: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0519 - val_loss: 0.0145 - val_mse: 0.0145 - val_mae: 0.0983 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0519 - val_loss: 0.0145 - val_mse: 0.0145 - val_mae: 0.0982 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0547 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0980 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0502 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0980 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0504 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0980 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0513 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0981 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0480 - val_loss: 0.0145 - val_mse: 0.0145 - val_mae: 0.0982 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0487 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0981 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0526 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0980 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0528 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0980 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0537 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0979 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0500 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0979 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0495 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0980 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0509 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0980 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0504 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0981 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0495 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0979 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0504 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0979 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0522 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0979 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0524 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0979 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0513 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0978 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0517 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0977 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0515 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0975 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0520 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0975 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0512 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0975 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0510 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0974 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0530 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0973 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0510 - val_loss: 0.0142 - val_mse: 0.0142 - val_mae: 0.0972 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0518 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0973 - lr: 1.0000e-05 - 104ms/epoch - 7ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0500 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0973 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0518 - val_loss: 0.0142 - val_mse: 0.0142 - val_mae: 0.0972 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0525 - val_loss: 0.0142 - val_mse: 0.0142 - val_mae: 0.0972 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0500 - val_loss: 0.0142 - val_mse: 0.0142 - val_mae: 0.0972 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0530 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0974 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0504 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0974 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0500 - val_loss: 0.0142 - val_mse: 0.0142 - val_mae: 0.0973 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.01271
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0500 - val_loss: 0.0142 - val_mse: 0.0142 - val_mae: 0.0971 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 00063: early stopping
SMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 123.96893050522607
RMSE: 11.134133576764116
MAPE: 9.602398807260117
EMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 63.919262026708296
RMSE: 7.994952284204596
MAPE: 6.479287961204322
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
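As with EMA, the TA-Lib help text describes `WMA` only briefly. A minimal NumPy sketch of the usual linearly weighted moving average (weights 1..n over the window, so the newest price carries the highest weight):

```python
import numpy as np

def wma(price, timeperiod=30):
    """Linearly weighted moving average; first timeperiod-1 outputs are NaN."""
    price = np.asarray(price, dtype=float)
    w = np.arange(1, timeperiod + 1, dtype=float)   # 1..n, newest weighted n
    out = np.full_like(price, np.nan)
    for i in range(timeperiod - 1, len(price)):
        out[i] = np.dot(price[i - timeperiod + 1:i + 1], w) / w.sum()
    return out

r = wma([1, 2, 3, 4], timeperiod=3)
```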
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.36 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4264.089, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3793.930, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.19 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3564.923, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3427.258, Time=0.11 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.57 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.40 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3429.258, Time=0.24 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.976 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        13:41:09   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.37726, saving model to LSTM3.h5
17/17 - 3s - loss: 0.1493 - mse: 0.1493 - mae: 0.2969 - val_loss: 0.3773 - val_mse: 0.3773 - val_mae: 0.5672 - lr: 0.0010 - 3s/epoch - 195ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.37726
17/17 - 0s - loss: 0.0305 - mse: 0.0305 - mae: 0.1451 - val_loss: 0.3820 - val_mse: 0.3820 - val_mae: 0.5742 - lr: 0.0010 - 109ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.37726 to 0.27295, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0286 - mse: 0.0286 - mae: 0.1379 - val_loss: 0.2729 - val_mse: 0.2729 - val_mae: 0.4756 - lr: 0.0010 - 134ms/epoch - 8ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.27295 to 0.25856, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0158 - mse: 0.0158 - mae: 0.1020 - val_loss: 0.2586 - val_mse: 0.2586 - val_mae: 0.4621 - lr: 0.0010 - 135ms/epoch - 8ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.25856 to 0.23338, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0145 - mse: 0.0145 - mae: 0.0954 - val_loss: 0.2334 - val_mse: 0.2334 - val_mae: 0.4355 - lr: 0.0010 - 115ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.23338 to 0.23136, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0120 - mse: 0.0120 - mae: 0.0873 - val_loss: 0.2314 - val_mse: 0.2314 - val_mae: 0.4337 - lr: 0.0010 - 120ms/epoch - 7ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.23136 to 0.21178, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0135 - mse: 0.0135 - mae: 0.0899 - val_loss: 0.2118 - val_mse: 0.2118 - val_mae: 0.4123 - lr: 0.0010 - 129ms/epoch - 8ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.21178 to 0.20067, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0115 - mse: 0.0115 - mae: 0.0855 - val_loss: 0.2007 - val_mse: 0.2007 - val_mae: 0.4003 - lr: 0.0010 - 129ms/epoch - 8ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.20067 to 0.19406, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0828 - val_loss: 0.1941 - val_mse: 0.1941 - val_mae: 0.3929 - lr: 0.0010 - 130ms/epoch - 8ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.19406
17/17 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0747 - val_loss: 0.1976 - val_mse: 0.1976 - val_mae: 0.3976 - lr: 0.0010 - 105ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.19406 to 0.17274, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0751 - val_loss: 0.1727 - val_mse: 0.1727 - val_mae: 0.3674 - lr: 0.0010 - 121ms/epoch - 7ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.17274
17/17 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0698 - val_loss: 0.1760 - val_mse: 0.1760 - val_mae: 0.3719 - lr: 0.0010 - 101ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.17274 to 0.17002, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0661 - val_loss: 0.1700 - val_mse: 0.1700 - val_mae: 0.3648 - lr: 0.0010 - 116ms/epoch - 7ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.17002 to 0.15280, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0697 - val_loss: 0.1528 - val_mse: 0.1528 - val_mae: 0.3426 - lr: 0.0010 - 120ms/epoch - 7ms/step
Epoch 15/500
Epoch 00015: val_loss improved from 0.15280 to 0.13889, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0683 - val_loss: 0.1389 - val_mse: 0.1389 - val_mae: 0.3234 - lr: 0.0010 - 134ms/epoch - 8ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.13889
17/17 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0640 - val_loss: 0.1452 - val_mse: 0.1452 - val_mae: 0.3326 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.13889
17/17 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0657 - val_loss: 0.1470 - val_mse: 0.1470 - val_mae: 0.3351 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.13889
17/17 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0587 - val_loss: 0.1411 - val_mse: 0.1411 - val_mae: 0.3272 - lr: 0.0010 - 107ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: val_loss improved from 0.13889 to 0.12895, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0611 - val_loss: 0.1290 - val_mse: 0.1290 - val_mae: 0.3102 - lr: 0.0010 - 122ms/epoch - 7ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.12895 to 0.11477, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0606 - val_loss: 0.1148 - val_mse: 0.1148 - val_mae: 0.2885 - lr: 0.0010 - 121ms/epoch - 7ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.11477
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0603 - val_loss: 0.1149 - val_mse: 0.1149 - val_mae: 0.2886 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.11477
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0562 - val_loss: 0.1246 - val_mse: 0.1246 - val_mae: 0.3042 - lr: 0.0010 - 103ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss improved from 0.11477 to 0.11461, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0565 - val_loss: 0.1146 - val_mse: 0.1146 - val_mae: 0.2897 - lr: 0.0010 - 126ms/epoch - 7ms/step
Epoch 24/500
Epoch 00024: val_loss improved from 0.11461 to 0.10943, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0594 - val_loss: 0.1094 - val_mse: 0.1094 - val_mae: 0.2816 - lr: 0.0010 - 137ms/epoch - 8ms/step
Epoch 25/500
Epoch 00025: val_loss improved from 0.10943 to 0.10647, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0571 - val_loss: 0.1065 - val_mse: 0.1065 - val_mae: 0.2772 - lr: 0.0010 - 132ms/epoch - 8ms/step
Epoch 26/500
Epoch 00026: val_loss improved from 0.10647 to 0.10282, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0567 - val_loss: 0.1028 - val_mse: 0.1028 - val_mae: 0.2715 - lr: 0.0010 - 120ms/epoch - 7ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.10282
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0569 - val_loss: 0.1031 - val_mse: 0.1031 - val_mae: 0.2718 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.10282
17/17 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0562 - val_loss: 0.1061 - val_mse: 0.1061 - val_mae: 0.2768 - lr: 0.0010 - 115ms/epoch - 7ms/step
Epoch 29/500
Epoch 00029: val_loss improved from 0.10282 to 0.10251, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0543 - val_loss: 0.1025 - val_mse: 0.1025 - val_mae: 0.2717 - lr: 0.0010 - 122ms/epoch - 7ms/step
Epoch 30/500
Epoch 00030: val_loss improved from 0.10251 to 0.10194, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0549 - val_loss: 0.1019 - val_mse: 0.1019 - val_mae: 0.2717 - lr: 0.0010 - 125ms/epoch - 7ms/step
Epoch 31/500
Epoch 00031: val_loss improved from 0.10194 to 0.09181, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0569 - val_loss: 0.0918 - val_mse: 0.0918 - val_mae: 0.2546 - lr: 0.0010 - 131ms/epoch - 8ms/step
Epoch 32/500
Epoch 00032: val_loss improved from 0.09181 to 0.08485, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0549 - val_loss: 0.0849 - val_mse: 0.0849 - val_mae: 0.2426 - lr: 0.0010 - 132ms/epoch - 8ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.08485
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0560 - val_loss: 0.0945 - val_mse: 0.0945 - val_mae: 0.2598 - lr: 0.0010 - 114ms/epoch - 7ms/step
Epoch 34/500
Epoch 00034: val_loss improved from 0.08485 to 0.08199, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0556 - val_loss: 0.0820 - val_mse: 0.0820 - val_mae: 0.2378 - lr: 0.0010 - 125ms/epoch - 7ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0532 - val_loss: 0.0899 - val_mse: 0.0899 - val_mae: 0.2523 - lr: 0.0010 - 110ms/epoch - 6ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0535 - val_loss: 0.0881 - val_mse: 0.0881 - val_mae: 0.2499 - lr: 0.0010 - 100ms/epoch - 6ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0575 - val_loss: 0.0868 - val_mse: 0.0868 - val_mae: 0.2486 - lr: 0.0010 - 112ms/epoch - 7ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0549 - val_loss: 0.0868 - val_mse: 0.0868 - val_mae: 0.2492 - lr: 0.0010 - 106ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00039: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0516 - val_loss: 0.0892 - val_mse: 0.0892 - val_mae: 0.2535 - lr: 0.0010 - 114ms/epoch - 7ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0491 - val_loss: 0.0862 - val_mse: 0.0862 - val_mae: 0.2481 - lr: 1.0000e-04 - 115ms/epoch - 7ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0468 - val_loss: 0.0852 - val_mse: 0.0852 - val_mae: 0.2464 - lr: 1.0000e-04 - 113ms/epoch - 7ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0463 - val_loss: 0.0848 - val_mse: 0.0848 - val_mae: 0.2457 - lr: 1.0000e-04 - 111ms/epoch - 7ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0479 - val_loss: 0.0835 - val_mse: 0.0835 - val_mae: 0.2436 - lr: 1.0000e-04 - 110ms/epoch - 6ms/step
Epoch 44/500
Epoch 00044: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00044: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0482 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2423 - lr: 1.0000e-04 - 104ms/epoch - 6ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0475 - val_loss: 0.0827 - val_mse: 0.0827 - val_mae: 0.2422 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0470 - val_loss: 0.0827 - val_mse: 0.0827 - val_mae: 0.2421 - lr: 1.0000e-05 - 105ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0462 - val_loss: 0.0827 - val_mse: 0.0827 - val_mae: 0.2422 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0464 - val_loss: 0.0827 - val_mse: 0.0827 - val_mae: 0.2422 - lr: 1.0000e-05 - 109ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00049: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0457 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2423 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0465 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2424 - lr: 1.0000e-05 - 110ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0474 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2425 - lr: 1.0000e-05 - 109ms/epoch - 6ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0475 - val_loss: 0.0829 - val_mse: 0.0829 - val_mae: 0.2426 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0488 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2424 - lr: 1.0000e-05 - 109ms/epoch - 6ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0481 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2424 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0476 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2424 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0466 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2425 - lr: 1.0000e-05 - 110ms/epoch - 6ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0455 - val_loss: 0.0829 - val_mse: 0.0829 - val_mae: 0.2426 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0509 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2425 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0482 - val_loss: 0.0827 - val_mse: 0.0827 - val_mae: 0.2422 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0477 - val_loss: 0.0826 - val_mse: 0.0826 - val_mae: 0.2421 - lr: 1.0000e-05 - 105ms/epoch - 6ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0478 - val_loss: 0.0825 - val_mse: 0.0825 - val_mae: 0.2420 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0480 - val_loss: 0.0826 - val_mse: 0.0826 - val_mae: 0.2422 - lr: 1.0000e-05 - 107ms/epoch - 6ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0484 - val_loss: 0.0826 - val_mse: 0.0826 - val_mae: 0.2422 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0478 - val_loss: 0.0827 - val_mse: 0.0827 - val_mae: 0.2423 - lr: 1.0000e-05 - 107ms/epoch - 6ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0474 - val_loss: 0.0826 - val_mse: 0.0826 - val_mae: 0.2423 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0473 - val_loss: 0.0826 - val_mse: 0.0826 - val_mae: 0.2422 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0466 - val_loss: 0.0827 - val_mse: 0.0827 - val_mae: 0.2423 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0484 - val_loss: 0.0827 - val_mse: 0.0827 - val_mae: 0.2423 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0449 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2426 - lr: 1.0000e-05 - 107ms/epoch - 6ms/step
Epoch 70/500
Epoch 00070: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0480 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2427 - lr: 1.0000e-05 - 107ms/epoch - 6ms/step
Epoch 71/500
Epoch 00071: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0459 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2425 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 72/500
Epoch 00072: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0479 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2426 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 73/500
Epoch 00073: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0471 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2426 - lr: 1.0000e-05 - 109ms/epoch - 6ms/step
Epoch 74/500
Epoch 00074: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0469 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2426 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 75/500
Epoch 00075: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0472 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2427 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 76/500
Epoch 00076: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0465 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2427 - lr: 1.0000e-05 - 109ms/epoch - 6ms/step
Epoch 77/500
Epoch 00077: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0474 - val_loss: 0.0829 - val_mse: 0.0829 - val_mae: 0.2429 - lr: 1.0000e-05 - 110ms/epoch - 6ms/step
Epoch 78/500
Epoch 00078: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0459 - val_loss: 0.0830 - val_mse: 0.0830 - val_mae: 0.2430 - lr: 1.0000e-05 - 106ms/epoch - 6ms/step
Epoch 79/500
Epoch 00079: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0475 - val_loss: 0.0830 - val_mse: 0.0830 - val_mae: 0.2431 - lr: 1.0000e-05 - 123ms/epoch - 7ms/step
Epoch 80/500
Epoch 00080: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0465 - val_loss: 0.0830 - val_mse: 0.0830 - val_mae: 0.2431 - lr: 1.0000e-05 - 110ms/epoch - 6ms/step
Epoch 81/500
Epoch 00081: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0468 - val_loss: 0.0831 - val_mse: 0.0831 - val_mae: 0.2433 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 82/500
Epoch 00082: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0452 - val_loss: 0.0831 - val_mse: 0.0831 - val_mae: 0.2433 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 83/500
Epoch 00083: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0484 - val_loss: 0.0830 - val_mse: 0.0830 - val_mae: 0.2431 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 84/500
Epoch 00084: val_loss did not improve from 0.08199
17/17 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0453 - val_loss: 0.0831 - val_mse: 0.0831 - val_mae: 0.2433 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 00084: early stopping
SMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 123.96893050522607
RMSE: 11.134133576764116
MAPE: 9.602398807260117
EMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 63.919262026708296
RMSE: 7.994952284204596
MAPE: 6.479287961204322
WMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 24.651058301828286
RMSE: 4.9649832126431495
MAPE: 3.9308905500983484
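The three error metrics reported for each moving average are related: RMSE is the square root of MSE, and MAPE is the mean absolute error expressed as a percentage of the actual value. A self-contained sketch of those formulas (the helper names are mine, not from the source):

```python
import math

def mse(actual, pred):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean squared error: sqrt(MSE)."""
    return math.sqrt(mse(actual, pred))

def mape(actual, pred):
    """Mean absolute percentage error, in percent (assumes nonzero actuals)."""
    return 100 / len(actual) * sum(abs((a - p) / a) for a, p in zip(actual, pred))

# RMSE is just sqrt(MSE), e.g. for the SMA figures reported above:
assert abs(math.sqrt(123.96893050522607) - 11.134133576764116) < 1e-9
```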
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
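The TA-Lib help text above only names the function; the underlying formula is DEMA = 2·EMA(price, n) − EMA(EMA(price, n), n), which cancels most of the lag that double smoothing would otherwise add. A rough pure-Python sketch — note TA-Lib seeds its EMA with an SMA over the first `timeperiod` values, so this simplified version (seeded on the first price) will differ slightly from `talib.DEMA`:

```python
def ema(xs, n):
    """Exponential moving average with alpha = 2/(n+1), seeded on xs[0]."""
    alpha = 2 / (n + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def dema(xs, n=30):
    """Double EMA: 2*EMA - EMA(EMA), reducing the lag of a single EMA."""
    e1 = ema(xs, n)
    e2 = ema(e1, n)
    return [2 * a - b for a, b in zip(e1, e2)]
```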
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.36 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4436.126, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3965.317, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.28 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3736.589, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3598.951, Time=0.07 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.68 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.68 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3600.951, Time=0.16 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.335 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        13:42:31   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                 14.41   Jarque-Bera (JB):          2460553.80
Prob(Q):                             0.00   Prob(JB):                        0.00
Heteroskedasticity (H):              0.00   Skew:                            3.89
Prob(H) (two-sided):                 0.00   Kurtosis:                      273.74
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
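The information criteria in the SARIMAX summary can be checked directly from the reported log likelihood: AIC = 2k − 2 ln L with k = 4 estimated parameters (three AR coefficients plus sigma2), and BIC = k ln(n) − 2 ln L. The numbers only reconcile if n is the effective sample size after d = 3 differencing (808 − 3 = 805), which appears to be the convention statsmodels used here:

```python
import math

loglik = -1795.475   # Log Likelihood from the SARIMAX summary
k = 4                # ar.L1, ar.L2, ar.L3, sigma2
n_eff = 808 - 3      # observations remaining after d=3 differencing

aic = 2 * k - 2 * loglik
bic = k * math.log(n_eff) - 2 * loglik

assert abs(aic - 3598.951) < 0.01   # matches the reported AIC
assert abs(bic - 3617.714) < 0.01   # matches the reported BIC
```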
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.08166, saving model to LSTM3.h5
10/10 - 3s - loss: 0.2185 - mse: 0.2185 - mae: 0.3699 - val_loss: 0.0817 - val_mse: 0.0817 - val_mae: 0.2556 - lr: 0.0010 - 3s/epoch - 281ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.08166
10/10 - 0s - loss: 0.0741 - mse: 0.0741 - mae: 0.2322 - val_loss: 0.0922 - val_mse: 0.0922 - val_mae: 0.2767 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.08166 to 0.07115, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0418 - mse: 0.0418 - mae: 0.1705 - val_loss: 0.0711 - val_mse: 0.0711 - val_mae: 0.2367 - lr: 0.0010 - 92ms/epoch - 9ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.07115 to 0.06541, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0292 - mse: 0.0292 - mae: 0.1339 - val_loss: 0.0654 - val_mse: 0.0654 - val_mae: 0.2254 - lr: 0.0010 - 94ms/epoch - 9ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.06541
10/10 - 0s - loss: 0.0304 - mse: 0.0304 - mae: 0.1373 - val_loss: 0.0696 - val_mse: 0.0696 - val_mae: 0.2359 - lr: 0.0010 - 77ms/epoch - 8ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.06541
10/10 - 0s - loss: 0.0228 - mse: 0.0228 - mae: 0.1219 - val_loss: 0.0655 - val_mse: 0.0655 - val_mae: 0.2283 - lr: 0.0010 - 75ms/epoch - 7ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.06541 to 0.05387, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0188 - mse: 0.0188 - mae: 0.1095 - val_loss: 0.0539 - val_mse: 0.0539 - val_mae: 0.2021 - lr: 0.0010 - 90ms/epoch - 9ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.05387 to 0.04709, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0178 - mse: 0.0178 - mae: 0.1049 - val_loss: 0.0471 - val_mse: 0.0471 - val_mae: 0.1858 - lr: 0.0010 - 98ms/epoch - 10ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.04709 to 0.04173, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0173 - mse: 0.0173 - mae: 0.1053 - val_loss: 0.0417 - val_mse: 0.0417 - val_mae: 0.1722 - lr: 0.0010 - 97ms/epoch - 10ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.04173 to 0.03636, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0139 - mse: 0.0139 - mae: 0.0929 - val_loss: 0.0364 - val_mse: 0.0364 - val_mae: 0.1578 - lr: 0.0010 - 99ms/epoch - 10ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.03636 to 0.03305, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0161 - mse: 0.0161 - mae: 0.0994 - val_loss: 0.0330 - val_mse: 0.0330 - val_mae: 0.1487 - lr: 0.0010 - 88ms/epoch - 9ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.03305 to 0.02940, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0905 - val_loss: 0.0294 - val_mse: 0.0294 - val_mae: 0.1390 - lr: 0.0010 - 100ms/epoch - 10ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.02940 to 0.02352, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0844 - val_loss: 0.0235 - val_mse: 0.0235 - val_mae: 0.1213 - lr: 0.0010 - 96ms/epoch - 10ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.02352 to 0.02073, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0817 - val_loss: 0.0207 - val_mse: 0.0207 - val_mae: 0.1118 - lr: 0.0010 - 89ms/epoch - 9ms/step
Epoch 15/500
Epoch 00015: val_loss improved from 0.02073 to 0.01901, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0771 - val_loss: 0.0190 - val_mse: 0.0190 - val_mae: 0.1056 - lr: 0.0010 - 94ms/epoch - 9ms/step
Epoch 16/500
Epoch 00016: val_loss improved from 0.01901 to 0.01814, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0754 - val_loss: 0.0181 - val_mse: 0.0181 - val_mae: 0.1021 - lr: 0.0010 - 90ms/epoch - 9ms/step
Epoch 17/500
Epoch 00017: val_loss improved from 0.01814 to 0.01752, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0764 - val_loss: 0.0175 - val_mse: 0.0175 - val_mae: 0.1000 - lr: 0.0010 - 91ms/epoch - 9ms/step
Epoch 18/500
Epoch 00018: val_loss improved from 0.01752 to 0.01714, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0758 - val_loss: 0.0171 - val_mse: 0.0171 - val_mae: 0.0988 - lr: 0.0010 - 98ms/epoch - 10ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.01714
10/10 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0676 - val_loss: 0.0182 - val_mse: 0.0182 - val_mae: 0.1040 - lr: 0.0010 - 79ms/epoch - 8ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.01714 to 0.01697, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0696 - val_loss: 0.0170 - val_mse: 0.0170 - val_mae: 0.0994 - lr: 0.0010 - 91ms/epoch - 9ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0660 - val_loss: 0.0170 - val_mse: 0.0170 - val_mae: 0.1001 - lr: 0.0010 - 80ms/epoch - 8ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0707 - val_loss: 0.0194 - val_mse: 0.0194 - val_mae: 0.1104 - lr: 0.0010 - 77ms/epoch - 8ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0650 - val_loss: 0.0235 - val_mse: 0.0235 - val_mae: 0.1251 - lr: 0.0010 - 75ms/epoch - 8ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0604 - val_loss: 0.0243 - val_mse: 0.0243 - val_mae: 0.1279 - lr: 0.0010 - 74ms/epoch - 7ms/step
Epoch 25/500
Epoch 00025: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00025: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0631 - val_loss: 0.0246 - val_mse: 0.0246 - val_mae: 0.1291 - lr: 0.0010 - 70ms/epoch - 7ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0602 - val_loss: 0.0247 - val_mse: 0.0247 - val_mae: 0.1293 - lr: 1.0000e-04 - 76ms/epoch - 8ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0610 - val_loss: 0.0249 - val_mse: 0.0249 - val_mae: 0.1299 - lr: 1.0000e-04 - 72ms/epoch - 7ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0641 - val_loss: 0.0249 - val_mse: 0.0249 - val_mae: 0.1300 - lr: 1.0000e-04 - 75ms/epoch - 7ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0611 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1316 - lr: 1.0000e-04 - 75ms/epoch - 7ms/step
Epoch 30/500
Epoch 00030: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00030: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0593 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1329 - lr: 1.0000e-04 - 83ms/epoch - 8ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0602 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1330 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0612 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1329 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0592 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1329 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0594 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1329 - lr: 1.0000e-05 - 84ms/epoch - 8ms/step
Epoch 35/500
Epoch 00035: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00035: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0636 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1330 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0606 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1330 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0626 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1329 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0623 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1329 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0579 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1328 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0641 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1327 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0571 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1328 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0585 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1328 - lr: 1.0000e-05 - 81ms/epoch - 8ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0601 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1330 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0571 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1331 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0591 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1329 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0640 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1328 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0572 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1327 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0623 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1327 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0599 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1326 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0596 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1325 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0596 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1324 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0596 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1324 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0597 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1325 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0600 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1325 - lr: 1.0000e-05 - 88ms/epoch - 9ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0612 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1325 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0595 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1325 - lr: 1.0000e-05 - 82ms/epoch - 8ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0618 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1325 - lr: 1.0000e-05 - 75ms/epoch - 8ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0629 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1327 - lr: 1.0000e-05 - 83ms/epoch - 8ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0608 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1328 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0596 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1328 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0598 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1330 - lr: 1.0000e-05 - 75ms/epoch - 8ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0635 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1331 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0578 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1332 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0616 - val_loss: 0.0260 - val_mse: 0.0260 - val_mae: 0.1333 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0619 - val_loss: 0.0260 - val_mse: 0.0260 - val_mae: 0.1333 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0616 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1331 - lr: 1.0000e-05 - 81ms/epoch - 8ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0583 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1331 - lr: 1.0000e-05 - 81ms/epoch - 8ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0607 - val_loss: 0.0260 - val_mse: 0.0260 - val_mae: 0.1334 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0578 - val_loss: 0.0261 - val_mse: 0.0261 - val_mae: 0.1336 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 70/500
Epoch 00070: val_loss did not improve from 0.01697
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0595 - val_loss: 0.0261 - val_mse: 0.0261 - val_mae: 0.1337 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 00070: early stopping
DEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 156.8635759091866
RMSE: 12.524518989134338
MAPE: 11.387412907589542
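The output never defines the two accuracy labels; a plausible reading is directional accuracy, i.e. how often the predicted direction of the next move matches the realized close-to-close direction, with "vs Close" measuring the prediction against the previous actual close and "vs Prediction" against the previous prediction. A sketch under that assumption (the function and its `against` parameter are hypothetical, not from the source):

```python
def directional_accuracy(closes, preds, against="close"):
    """Percentage of days where the predicted direction matches the realized
    close-to-close direction. against='close' compares pred[t] to the prior
    close; against='prediction' compares pred[t] to the prior prediction.
    (This interpretation of the log's labels is an assumption.)"""
    hits = total = 0
    for t in range(1, len(closes)):
        actual_up = closes[t] > closes[t - 1]
        ref = closes[t - 1] if against == "close" else preds[t - 1]
        pred_up = preds[t] > ref
        hits += (actual_up == pred_up)
        total += 1
    return 100 * hits / total
```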
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
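Kaufman's Adaptive Moving Average, requested above, scales its smoothing constant by an efficiency ratio — net price change over the window divided by the sum of absolute daily moves — so it hugs trends and flattens out in choppy ranges. A simplified sketch with the conventional fast = 2 / slow = 30 EMA bounds; TA-Lib's exact seeding and defaults may differ in detail:

```python
def kama(prices, n=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (simplified sketch).

    ER      = |price[t] - price[t-n]| / sum of |daily moves| over the window
    SC      = (ER*(fast_sc - slow_sc) + slow_sc)^2   (adaptive smoothing)
    KAMA[t] = KAMA[t-1] + SC * (price[t] - KAMA[t-1])
    """
    fast_sc, slow_sc = 2 / (fast + 1), 2 / (slow + 1)
    out = [prices[n]]                  # seed at the first computable point
    for t in range(n + 1, len(prices)):
        change = abs(prices[t] - prices[t - n])
        volatility = sum(abs(prices[i] - prices[i - 1])
                         for i in range(t - n + 1, t + 1))
        er = change / volatility if volatility else 0.0
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out.append(out[-1] + sc * (prices[t] - out[-1]))
    return out
```

On a perfectly trending series ER stays at 1 and KAMA tracks the trend with minimal lag; on a flat series it simply holds its level.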
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.31 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4190.464, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3724.371, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.22 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3494.154, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3357.435, Time=0.12 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.44 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.56 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3359.435, Time=0.19 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.943 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        13:43:43   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                 14.20   Jarque-Bera (JB):          2338363.32
Prob(Q):                             0.00   Prob(JB):                        0.00
Heteroskedasticity (H):              0.01   Skew:                            3.76
Prob(H) (two-sided):                 0.00   Kurtosis:                      266.93
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
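ARIMA(3,3,0) means the AR(3) part is fit to the series after three rounds of differencing — an unusually high d, but it is what the stepwise search converged to for both DEMA and KAMA. Triple differencing is exactly invertible by three cumulative sums given the three stored initial values, which is how forecasts on the differenced scale map back to price levels. A small round-trip demonstration:

```python
def diff(xs):
    """First difference: xs[t] - xs[t-1]."""
    return [b - a for a, b in zip(xs, xs[1:])]

def inv_diff(dx, x0):
    """Undo one difference given the first value of the original series."""
    out = [x0]
    for d in dx:
        out.append(out[-1] + d)
    return out

series = [float(x ** 3 % 97) for x in range(20)]   # arbitrary integer-valued data

# Difference three times, as ARIMA(p, 3, q) does before fitting the ARMA part.
d1 = diff(series)
d2 = diff(d1)
d3 = diff(d2)
assert len(d3) == len(series) - 3

# Inverting with the stored initial values recovers the series exactly.
rebuilt = inv_diff(inv_diff(inv_diff(d3, d2[0]), d1[0]), series[0])
assert rebuilt == series
```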
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.08578, saving model to LSTM3.h5
45/45 - 3s - loss: 0.1392 - mse: 0.1392 - mae: 0.2841 - val_loss: 0.0858 - val_mse: 0.0858 - val_mae: 0.2303 - lr: 0.0010 - 3s/epoch - 77ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.08578 to 0.06798, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0336 - mse: 0.0336 - mae: 0.1463 - val_loss: 0.0680 - val_mse: 0.0680 - val_mae: 0.2133 - lr: 0.0010 - 278ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.06798 to 0.06203, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0172 - mse: 0.0172 - mae: 0.1045 - val_loss: 0.0620 - val_mse: 0.0620 - val_mae: 0.2046 - lr: 0.0010 - 254ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.06203 to 0.05766, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0936 - val_loss: 0.0577 - val_mse: 0.0577 - val_mae: 0.1954 - lr: 0.0010 - 265ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.05766 to 0.05452, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0851 - val_loss: 0.0545 - val_mse: 0.0545 - val_mae: 0.1874 - lr: 0.0010 - 252ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.05452 to 0.05091, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0771 - val_loss: 0.0509 - val_mse: 0.0509 - val_mae: 0.1784 - lr: 0.0010 - 251ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.05091 to 0.05062, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0721 - val_loss: 0.0506 - val_mse: 0.0506 - val_mae: 0.1721 - lr: 0.0010 - 266ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.05062 to 0.04676, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0703 - val_loss: 0.0468 - val_mse: 0.0468 - val_mae: 0.1681 - lr: 0.0010 - 262ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.04676
45/45 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0672 - val_loss: 0.0489 - val_mse: 0.0489 - val_mae: 0.1596 - lr: 0.0010 - 244ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.04676
45/45 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0705 - val_loss: 0.0477 - val_mse: 0.0477 - val_mae: 0.1563 - lr: 0.0010 - 258ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.04676 to 0.04290, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0714 - val_loss: 0.0429 - val_mse: 0.0429 - val_mae: 0.1535 - lr: 0.0010 - 263ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.04290
45/45 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0695 - val_loss: 0.0440 - val_mse: 0.0440 - val_mae: 0.1459 - lr: 0.0010 - 245ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.04290 to 0.04108, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0721 - val_loss: 0.0411 - val_mse: 0.0411 - val_mae: 0.1455 - lr: 0.0010 - 257ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.04108 to 0.04037, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0758 - val_loss: 0.0404 - val_mse: 0.0404 - val_mae: 0.1409 - lr: 0.0010 - 263ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.04037
45/45 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0750 - val_loss: 0.0409 - val_mse: 0.0409 - val_mae: 0.1418 - lr: 0.0010 - 256ms/epoch - 6ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.04037
45/45 - 0s - loss: 0.0121 - mse: 0.0121 - mae: 0.0821 - val_loss: 0.0405 - val_mse: 0.0405 - val_mae: 0.1387 - lr: 0.0010 - 244ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.04037
45/45 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0799 - val_loss: 0.0463 - val_mse: 0.0463 - val_mae: 0.1390 - lr: 0.0010 - 233ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.04037
45/45 - 0s - loss: 0.0156 - mse: 0.0156 - mae: 0.0954 - val_loss: 0.0453 - val_mse: 0.0453 - val_mae: 0.1371 - lr: 0.0010 - 239ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00019: val_loss did not improve from 0.04037
45/45 - 0s - loss: 0.0146 - mse: 0.0146 - mae: 0.0925 - val_loss: 0.0424 - val_mse: 0.0424 - val_mae: 0.1361 - lr: 0.0010 - 254ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.04037 to 0.03541, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0406 - mse: 0.0406 - mae: 0.1683 - val_loss: 0.0354 - val_mse: 0.0354 - val_mae: 0.1453 - lr: 1.0000e-04 - 261ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss improved from 0.03541 to 0.03531, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0121 - mse: 0.0121 - mae: 0.0925 - val_loss: 0.0353 - val_mse: 0.0353 - val_mae: 0.1488 - lr: 1.0000e-04 - 252ms/epoch - 6ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0822 - val_loss: 0.0356 - val_mse: 0.0356 - val_mae: 0.1487 - lr: 1.0000e-04 - 240ms/epoch - 5ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0801 - val_loss: 0.0360 - val_mse: 0.0360 - val_mae: 0.1500 - lr: 1.0000e-04 - 249ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0790 - val_loss: 0.0363 - val_mse: 0.0363 - val_mae: 0.1500 - lr: 1.0000e-04 - 236ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00025: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0764 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1507 - lr: 1.0000e-04 - 238ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0708 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1507 - lr: 1.0000e-05 - 247ms/epoch - 5ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0709 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1506 - lr: 1.0000e-05 - 240ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0718 - val_loss: 0.0369 - val_mse: 0.0369 - val_mae: 0.1506 - lr: 1.0000e-05 - 245ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0697 - val_loss: 0.0369 - val_mse: 0.0369 - val_mae: 0.1505 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 30/500
Epoch 00030: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00030: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0697 - val_loss: 0.0369 - val_mse: 0.0369 - val_mae: 0.1505 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0692 - val_loss: 0.0370 - val_mse: 0.0370 - val_mae: 0.1506 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0711 - val_loss: 0.0370 - val_mse: 0.0370 - val_mae: 0.1505 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0695 - val_loss: 0.0370 - val_mse: 0.0370 - val_mae: 0.1505 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0685 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1504 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0710 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1504 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0680 - val_loss: 0.0372 - val_mse: 0.0372 - val_mae: 0.1504 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0678 - val_loss: 0.0372 - val_mse: 0.0372 - val_mae: 0.1504 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0687 - val_loss: 0.0373 - val_mse: 0.0373 - val_mae: 0.1503 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0693 - val_loss: 0.0373 - val_mse: 0.0373 - val_mae: 0.1503 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0717 - val_loss: 0.0374 - val_mse: 0.0374 - val_mae: 0.1502 - lr: 1.0000e-05 - 245ms/epoch - 5ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0682 - val_loss: 0.0375 - val_mse: 0.0375 - val_mae: 0.1502 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0692 - val_loss: 0.0375 - val_mse: 0.0375 - val_mae: 0.1502 - lr: 1.0000e-05 - 244ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0705 - val_loss: 0.0376 - val_mse: 0.0376 - val_mae: 0.1500 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0687 - val_loss: 0.0377 - val_mse: 0.0377 - val_mae: 0.1498 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0687 - val_loss: 0.0377 - val_mse: 0.0377 - val_mae: 0.1498 - lr: 1.0000e-05 - 247ms/epoch - 5ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0701 - val_loss: 0.0378 - val_mse: 0.0378 - val_mae: 0.1498 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0691 - val_loss: 0.0378 - val_mse: 0.0378 - val_mae: 0.1498 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0706 - val_loss: 0.0379 - val_mse: 0.0379 - val_mae: 0.1497 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0691 - val_loss: 0.0380 - val_mse: 0.0380 - val_mae: 0.1497 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0682 - val_loss: 0.0381 - val_mse: 0.0381 - val_mae: 0.1496 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0655 - val_loss: 0.0382 - val_mse: 0.0382 - val_mae: 0.1495 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0656 - val_loss: 0.0382 - val_mse: 0.0382 - val_mae: 0.1496 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0682 - val_loss: 0.0383 - val_mse: 0.0383 - val_mae: 0.1495 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0658 - val_loss: 0.0384 - val_mse: 0.0384 - val_mae: 0.1494 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0684 - val_loss: 0.0385 - val_mse: 0.0385 - val_mae: 0.1495 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0666 - val_loss: 0.0386 - val_mse: 0.0386 - val_mae: 0.1493 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0674 - val_loss: 0.0387 - val_mse: 0.0387 - val_mae: 0.1493 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0676 - val_loss: 0.0388 - val_mse: 0.0388 - val_mae: 0.1492 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0662 - val_loss: 0.0389 - val_mse: 0.0389 - val_mae: 0.1491 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0672 - val_loss: 0.0390 - val_mse: 0.0390 - val_mae: 0.1492 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0658 - val_loss: 0.0390 - val_mse: 0.0390 - val_mae: 0.1492 - lr: 1.0000e-05 - 244ms/epoch - 5ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0653 - val_loss: 0.0392 - val_mse: 0.0392 - val_mae: 0.1492 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0642 - val_loss: 0.0393 - val_mse: 0.0393 - val_mae: 0.1492 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0645 - val_loss: 0.0393 - val_mse: 0.0393 - val_mae: 0.1493 - lr: 1.0000e-05 - 244ms/epoch - 5ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0626 - val_loss: 0.0395 - val_mse: 0.0395 - val_mae: 0.1492 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0660 - val_loss: 0.0396 - val_mse: 0.0396 - val_mae: 0.1492 - lr: 1.0000e-05 - 252ms/epoch - 6ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0642 - val_loss: 0.0397 - val_mse: 0.0397 - val_mae: 0.1491 - lr: 1.0000e-05 - 244ms/epoch - 5ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0640 - val_loss: 0.0398 - val_mse: 0.0398 - val_mae: 0.1491 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0629 - val_loss: 0.0400 - val_mse: 0.0400 - val_mae: 0.1491 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 70/500
Epoch 00070: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0667 - val_loss: 0.0401 - val_mse: 0.0401 - val_mae: 0.1492 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 71/500
Epoch 00071: val_loss did not improve from 0.03531
45/45 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0610 - val_loss: 0.0402 - val_mse: 0.0402 - val_mae: 0.1492 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 00071: early stopping
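The learning-rate trajectory visible in the log above (1.0e-03, then 1.0e-04, then 1.0e-05, after which it is clamped) is what Keras's `ReduceLROnPlateau` produces with `factor=0.1` and `min_lr=1e-5`. The notebook's callback configuration is not shown, so the following is a minimal pure-Python sketch of that scheduling logic, with the `patience` value an assumption read off the log:

```python
def schedule_lr(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Mimic ReduceLROnPlateau: cut lr by `factor` after `patience`
    epochs without a new best val_loss; never go below `min_lr`."""
    best = float("inf")
    wait = 0
    history = []
    for loss in val_losses:
        history.append(lr)          # lr in effect for this epoch
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return history
```

Feeding it a sequence that improves briefly and then stalls reproduces the stepped decay seen in the log; the `min_lr` floor is why the final "reducing learning rate to 1e-05" message leaves the rate unchanged.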
SMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 123.96893050522607
RMSE: 11.134133576764116
MAPE: 9.602398807260117
EMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 63.919262026708296
RMSE: 7.994952284204596
MAPE: 6.479287961204322
WMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 24.651058301828286
RMSE: 4.9649832126431495
MAPE: 3.9308905500983484
DEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 156.8635759091866
RMSE: 12.524518989134338
MAPE: 11.387412907589542
KAMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 59.19746610115158
RMSE: 7.69398895899595
MAPE: 6.776737847872761
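Each indicator's report combines standard error metrics with two directional-accuracy figures. The notebook's metric code is not shown, so the exact direction conventions below are assumptions; the error metrics follow their textbook definitions:

```python
import numpy as np

def report(pred, close):
    """Error metrics plus directional accuracy, accuracies in percent."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    # "Prediction vs Close": did pred_t point the same way as the
    # realized move from close_{t-1} to close_t?
    vs_close = np.mean(np.sign(np.diff(close)) ==
                       np.sign(pred[1:] - close[:-1])) * 100
    # "Prediction vs Prediction": compare realized moves against
    # moves in the prediction series itself.
    vs_pred = np.mean(np.sign(np.diff(close)) ==
                      np.sign(np.diff(pred))) * 100
    return {"acc_vs_close": vs_close, "acc_vs_pred": vs_pred,
            "MSE": mse, "RMSE": rmse, "MAPE": mape}
```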
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
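MIDPOINT is simply the midpoint of the highest and lowest price over the window. An equivalent of TA-Lib's `MIDPOINT(price, timeperiod=14)` in plain pandas, as a sketch:

```python
import pandas as pd

def midpoint(price, timeperiod=14):
    """(highest + lowest) / 2 over a rolling window; NaN until
    `timeperiod` values are available, matching TA-Lib's MIDPOINT."""
    s = pd.Series(price, dtype=float)
    roll = s.rolling(timeperiod)
    return (roll.max() + roll.min()) / 2
```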
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.32 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4212.289, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3747.746, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.18 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3523.401, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3387.759, Time=0.07 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.90 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.61 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3389.758, Time=0.15 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.352 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        13:45:17   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                13.21   Jarque-Bera (JB):        1659080.01
Prob(Q):                            0.00   Prob(JB):                      0.00
Heteroskedasticity (H):             0.08   Skew:                          3.28
Prob(H) (two-sided):                0.00   Kurtosis:                    225.31
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
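As a sanity check, the AIC that the stepwise search minimizes follows the standard identity AIC = 2k − 2 ln L, where k = 4 here (three AR coefficients plus sigma2). Plugging in the log likelihood from the summary recovers the reported value, with a tiny residual from rounding in the printed log likelihood:

```python
# AIC = 2k - 2*lnL; values taken from the SARIMAX summary above.
k = 4                       # ar.L1, ar.L2, ar.L3, sigma2
log_likelihood = -1689.879
aic = 2 * k - 2 * log_likelihood
assert abs(aic - 3387.759) < 0.01   # matches the printed AIC
```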
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.36509, saving model to LSTM3.h5
58/58 - 3s - loss: 0.1032 - mse: 0.1032 - mae: 0.2314 - val_loss: 0.3651 - val_mse: 0.3651 - val_mae: 0.5743 - lr: 0.0010 - 3s/epoch - 52ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.36509
58/58 - 0s - loss: 0.0213 - mse: 0.0213 - mae: 0.1163 - val_loss: 0.3714 - val_mse: 0.3714 - val_mae: 0.5833 - lr: 0.0010 - 301ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.36509 to 0.26686, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0166 - mse: 0.0166 - mae: 0.1008 - val_loss: 0.2669 - val_mse: 0.2669 - val_mae: 0.4911 - lr: 0.0010 - 327ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.26686
58/58 - 0s - loss: 0.0113 - mse: 0.0113 - mae: 0.0836 - val_loss: 0.2773 - val_mse: 0.2773 - val_mae: 0.5038 - lr: 0.0010 - 304ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.26686 to 0.18693, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0817 - val_loss: 0.1869 - val_mse: 0.1869 - val_mae: 0.4054 - lr: 0.0010 - 329ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.18693
58/58 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0756 - val_loss: 0.2327 - val_mse: 0.2327 - val_mae: 0.4609 - lr: 0.0010 - 297ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.18693 to 0.13637, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0756 - val_loss: 0.1364 - val_mse: 0.1364 - val_mae: 0.3435 - lr: 0.0010 - 339ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.13637
58/58 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0660 - val_loss: 0.1913 - val_mse: 0.1913 - val_mae: 0.4148 - lr: 0.0010 - 300ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.13637 to 0.08219, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0730 - val_loss: 0.0822 - val_mse: 0.0822 - val_mae: 0.2581 - lr: 0.0010 - 318ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.08219
58/58 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0647 - val_loss: 0.1769 - val_mse: 0.1769 - val_mae: 0.3982 - lr: 0.0010 - 308ms/epoch - 5ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.08219 to 0.05422, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0801 - val_loss: 0.0542 - val_mse: 0.0542 - val_mae: 0.2005 - lr: 0.0010 - 316ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.05422
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0609 - val_loss: 0.1559 - val_mse: 0.1559 - val_mae: 0.3738 - lr: 0.0010 - 283ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.05422 to 0.03917, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0783 - val_loss: 0.0392 - val_mse: 0.0392 - val_mae: 0.1653 - lr: 0.0010 - 324ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.03917
58/58 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0625 - val_loss: 0.1523 - val_mse: 0.1523 - val_mae: 0.3712 - lr: 0.0010 - 306ms/epoch - 5ms/step
Epoch 15/500
Epoch 00015: val_loss improved from 0.03917 to 0.01711, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0872 - val_loss: 0.0171 - val_mse: 0.0171 - val_mae: 0.0982 - lr: 0.0010 - 308ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.01711
58/58 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0649 - val_loss: 0.1457 - val_mse: 0.1457 - val_mae: 0.3636 - lr: 0.0010 - 311ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss improved from 0.01711 to 0.01155, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0149 - mse: 0.0149 - mae: 0.0910 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0818 - lr: 0.0010 - 321ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0658 - val_loss: 0.0945 - val_mse: 0.0945 - val_mae: 0.2906 - lr: 0.0010 - 303ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0136 - mse: 0.0136 - mae: 0.0880 - val_loss: 0.0161 - val_mse: 0.0161 - val_mae: 0.0962 - lr: 0.0010 - 311ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0654 - val_loss: 0.0877 - val_mse: 0.0877 - val_mae: 0.2806 - lr: 0.0010 - 298ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0121 - mse: 0.0121 - mae: 0.0836 - val_loss: 0.0219 - val_mse: 0.0219 - val_mae: 0.1198 - lr: 0.0010 - 308ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00022: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0698 - val_loss: 0.0542 - val_mse: 0.0542 - val_mae: 0.2153 - lr: 0.0010 - 310ms/epoch - 5ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0289 - mse: 0.0289 - mae: 0.1435 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1435 - lr: 1.0000e-04 - 306ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0837 - val_loss: 0.0222 - val_mse: 0.0222 - val_mae: 0.1266 - lr: 1.0000e-04 - 299ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0735 - val_loss: 0.0201 - val_mse: 0.0201 - val_mae: 0.1178 - lr: 1.0000e-04 - 299ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0704 - val_loss: 0.0189 - val_mse: 0.0189 - val_mae: 0.1127 - lr: 1.0000e-04 - 297ms/epoch - 5ms/step
Epoch 27/500
Epoch 00027: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00027: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0707 - val_loss: 0.0192 - val_mse: 0.0192 - val_mae: 0.1131 - lr: 1.0000e-04 - 293ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0671 - val_loss: 0.0192 - val_mse: 0.0192 - val_mae: 0.1129 - lr: 1.0000e-05 - 309ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0664 - val_loss: 0.0190 - val_mse: 0.0190 - val_mae: 0.1125 - lr: 1.0000e-05 - 313ms/epoch - 5ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0654 - val_loss: 0.0189 - val_mse: 0.0189 - val_mae: 0.1120 - lr: 1.0000e-05 - 300ms/epoch - 5ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0664 - val_loss: 0.0187 - val_mse: 0.0187 - val_mae: 0.1112 - lr: 1.0000e-05 - 302ms/epoch - 5ms/step
Epoch 32/500
Epoch 00032: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00032: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0647 - val_loss: 0.0187 - val_mse: 0.0187 - val_mae: 0.1111 - lr: 1.0000e-05 - 303ms/epoch - 5ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0622 - val_loss: 0.0187 - val_mse: 0.0187 - val_mae: 0.1109 - lr: 1.0000e-05 - 302ms/epoch - 5ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0616 - val_loss: 0.0187 - val_mse: 0.0187 - val_mae: 0.1109 - lr: 1.0000e-05 - 290ms/epoch - 5ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0661 - val_loss: 0.0187 - val_mse: 0.0187 - val_mae: 0.1109 - lr: 1.0000e-05 - 307ms/epoch - 5ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0634 - val_loss: 0.0187 - val_mse: 0.0187 - val_mae: 0.1108 - lr: 1.0000e-05 - 308ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0629 - val_loss: 0.0187 - val_mse: 0.0187 - val_mae: 0.1109 - lr: 1.0000e-05 - 298ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0630 - val_loss: 0.0188 - val_mse: 0.0188 - val_mae: 0.1110 - lr: 1.0000e-05 - 292ms/epoch - 5ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0648 - val_loss: 0.0189 - val_mse: 0.0189 - val_mae: 0.1113 - lr: 1.0000e-05 - 317ms/epoch - 5ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0641 - val_loss: 0.0189 - val_mse: 0.0189 - val_mae: 0.1116 - lr: 1.0000e-05 - 291ms/epoch - 5ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0661 - val_loss: 0.0189 - val_mse: 0.0189 - val_mae: 0.1113 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0660 - val_loss: 0.0189 - val_mse: 0.0189 - val_mae: 0.1112 - lr: 1.0000e-05 - 304ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0638 - val_loss: 0.0190 - val_mse: 0.0190 - val_mae: 0.1116 - lr: 1.0000e-05 - 299ms/epoch - 5ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0597 - val_loss: 0.0191 - val_mse: 0.0191 - val_mae: 0.1118 - lr: 1.0000e-05 - 294ms/epoch - 5ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0644 - val_loss: 0.0190 - val_mse: 0.0190 - val_mae: 0.1115 - lr: 1.0000e-05 - 307ms/epoch - 5ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0597 - val_loss: 0.0190 - val_mse: 0.0190 - val_mae: 0.1115 - lr: 1.0000e-05 - 299ms/epoch - 5ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0607 - val_loss: 0.0191 - val_mse: 0.0191 - val_mae: 0.1115 - lr: 1.0000e-05 - 301ms/epoch - 5ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0617 - val_loss: 0.0192 - val_mse: 0.0192 - val_mae: 0.1120 - lr: 1.0000e-05 - 303ms/epoch - 5ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0627 - val_loss: 0.0194 - val_mse: 0.0194 - val_mae: 0.1126 - lr: 1.0000e-05 - 319ms/epoch - 5ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0607 - val_loss: 0.0194 - val_mse: 0.0194 - val_mae: 0.1126 - lr: 1.0000e-05 - 302ms/epoch - 5ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0603 - val_loss: 0.0194 - val_mse: 0.0194 - val_mae: 0.1126 - lr: 1.0000e-05 - 296ms/epoch - 5ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0599 - val_loss: 0.0195 - val_mse: 0.0195 - val_mae: 0.1129 - lr: 1.0000e-05 - 306ms/epoch - 5ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0583 - val_loss: 0.0197 - val_mse: 0.0197 - val_mae: 0.1136 - lr: 1.0000e-05 - 293ms/epoch - 5ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0607 - val_loss: 0.0197 - val_mse: 0.0197 - val_mae: 0.1135 - lr: 1.0000e-05 - 305ms/epoch - 5ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0591 - val_loss: 0.0198 - val_mse: 0.0198 - val_mae: 0.1137 - lr: 1.0000e-05 - 301ms/epoch - 5ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0572 - val_loss: 0.0199 - val_mse: 0.0199 - val_mae: 0.1141 - lr: 1.0000e-05 - 305ms/epoch - 5ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0565 - val_loss: 0.0200 - val_mse: 0.0200 - val_mae: 0.1145 - lr: 1.0000e-05 - 297ms/epoch - 5ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0613 - val_loss: 0.0201 - val_mse: 0.0201 - val_mae: 0.1146 - lr: 1.0000e-05 - 309ms/epoch - 5ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0621 - val_loss: 0.0202 - val_mse: 0.0202 - val_mae: 0.1149 - lr: 1.0000e-05 - 302ms/epoch - 5ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0582 - val_loss: 0.0203 - val_mse: 0.0203 - val_mae: 0.1155 - lr: 1.0000e-05 - 296ms/epoch - 5ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0597 - val_loss: 0.0205 - val_mse: 0.0205 - val_mae: 0.1161 - lr: 1.0000e-05 - 310ms/epoch - 5ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0594 - val_loss: 0.0206 - val_mse: 0.0206 - val_mae: 0.1162 - lr: 1.0000e-05 - 303ms/epoch - 5ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0579 - val_loss: 0.0209 - val_mse: 0.0209 - val_mae: 0.1171 - lr: 1.0000e-05 - 299ms/epoch - 5ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0594 - val_loss: 0.0211 - val_mse: 0.0211 - val_mae: 0.1179 - lr: 1.0000e-05 - 308ms/epoch - 5ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0590 - val_loss: 0.0212 - val_mse: 0.0212 - val_mae: 0.1182 - lr: 1.0000e-05 - 303ms/epoch - 5ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0596 - val_loss: 0.0213 - val_mse: 0.0213 - val_mae: 0.1183 - lr: 1.0000e-05 - 294ms/epoch - 5ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.01155
58/58 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0574 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1191 - lr: 1.0000e-05 - 296ms/epoch - 5ms/step
Epoch 00067: early stopping
MIDPOINT
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 46.490023595118274
RMSE: 6.818359303756166
MAPE: 5.538801606657957
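Each indicator's summary above reports two directional-accuracy figures plus MSE, RMSE, and MAPE. The notebook's metric functions are not shown, so the following is a pure-Python sketch of how such figures are commonly computed; the `directional_accuracy` function is one plausible reading of the "Prediction vs Close" number, not a confirmed reconstruction.

```python
import math

def regression_metrics(actual, predicted):
    """MSE, RMSE, and MAPE (in percent) between two equal-length series."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(mse)
    mape = 100 / n * sum(abs((a - p) / a) for a, p in zip(actual, predicted))
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Share of days where the predicted move (vs. yesterday's actual close)
    has the same sign as the actual close-to-close move -- one plausible
    reading of the "Prediction vs Close" accuracy reported above."""
    hits = sum(
        (predicted[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0
        for i in range(1, len(actual))
    )
    return 100 * hits / (len(actual) - 1)
```

Note that MAPE is scale-free while MSE/RMSE are in price units squared/units, which is why WMA's much lower MSE (24.65 vs. 123.97 for SMA) lines up with its lower MAPE.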
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
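TA-Lib's help text above gives T3's signature; under the hood, Tillson's T3 is a "generalized DEMA" applied three times. A pure-Python sketch of that formula — TA-Lib seeds its EMAs with an SMA rather than the first value, so outputs will not match it bit-for-bit:

```python
def ema(xs, period):
    """Recursive EMA seeded with the first value (a sketch; TA-Lib seeds
    with an SMA over the first `period` values, so numbers differ slightly)."""
    k = 2 / (period + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(k * x + (1 - k) * out[-1])
    return out

def t3(xs, period=5, vfactor=0.7):
    """Tillson T3: a 'generalized DEMA' GD(x) = (1+v)*EMA(x) - v*EMA(EMA(x)),
    composed three times to smooth heavily while keeping lag low."""
    def gd(series):
        e1 = ema(series, period)
        e2 = ema(e1, period)
        return [(1 + vfactor) * a - vfactor * b for a, b in zip(e1, e2)]
    return gd(gd(gd(xs)))
```

A constant input passes through unchanged, which is a quick sanity check on any smoother.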
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.32 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4414.515, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3944.062, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.25 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3715.173, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3577.471, Time=0.06 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.02 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.41 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3579.471, Time=0.14 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.316 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        13:47:01   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
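The information criteria in the SARIMAX table above follow directly from the log likelihood. A quick arithmetic check, assuming k = 4 estimated parameters (three AR coefficients plus sigma2) and statsmodels' use of the differenced sample size (808 − 3 = 805 after d = 3 differencing) in the BIC/HQIC penalties:

```python
import math

loglik = -1784.736   # Log Likelihood from the SARIMAX table above
k = 4                # estimated parameters: ar.L1, ar.L2, ar.L3, sigma2
n = 808 - 3          # observations remaining after d=3 differencing

aic = 2 * k - 2 * loglik                      # penalizes parameter count
bic = k * math.log(n) - 2 * loglik            # heavier penalty for large n
hqic = 2 * k * math.log(math.log(n)) - 2 * loglik
print(round(aic, 3), round(bic, 3), round(hqic, 3))
```

These reproduce the table's AIC 3577.471, BIC 3596.235, and HQIC 3584.677 to within rounding of the reported log likelihood.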
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.78033, saving model to LSTM3.h5
43/43 - 3s - loss: 0.1572 - mse: 0.1572 - mae: 0.3172 - val_loss: 0.7803 - val_mse: 0.7803 - val_mae: 0.8584 - lr: 0.0010 - 3s/epoch - 80ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.78033 to 0.31463, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0652 - mse: 0.0652 - mae: 0.2098 - val_loss: 0.3146 - val_mse: 0.3146 - val_mae: 0.5364 - lr: 0.0010 - 265ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.31463 to 0.17210, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0324 - mse: 0.0324 - mae: 0.1441 - val_loss: 0.1721 - val_mse: 0.1721 - val_mae: 0.3885 - lr: 0.0010 - 251ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.17210 to 0.13935, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0208 - mse: 0.0208 - mae: 0.1148 - val_loss: 0.1394 - val_mse: 0.1394 - val_mae: 0.3462 - lr: 0.0010 - 249ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.13935 to 0.12467, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0178 - mse: 0.0178 - mae: 0.1064 - val_loss: 0.1247 - val_mse: 0.1247 - val_mae: 0.3259 - lr: 0.0010 - 245ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.12467 to 0.12418, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0159 - mse: 0.0159 - mae: 0.1017 - val_loss: 0.1242 - val_mse: 0.1242 - val_mae: 0.3253 - lr: 0.0010 - 261ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.12418 to 0.11508, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0136 - mse: 0.0136 - mae: 0.0925 - val_loss: 0.1151 - val_mse: 0.1151 - val_mae: 0.3120 - lr: 0.0010 - 281ms/epoch - 7ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.11508 to 0.10550, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0140 - mse: 0.0140 - mae: 0.0952 - val_loss: 0.1055 - val_mse: 0.1055 - val_mae: 0.2971 - lr: 0.0010 - 250ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.10550 to 0.10018, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0908 - val_loss: 0.1002 - val_mse: 0.1002 - val_mae: 0.2888 - lr: 0.0010 - 248ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.10018 to 0.09417, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0851 - val_loss: 0.0942 - val_mse: 0.0942 - val_mae: 0.2790 - lr: 0.0010 - 270ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.09417
43/43 - 0s - loss: 0.0121 - mse: 0.0121 - mae: 0.0872 - val_loss: 0.0966 - val_mse: 0.0966 - val_mae: 0.2830 - lr: 0.0010 - 232ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.09417
43/43 - 0s - loss: 0.0128 - mse: 0.0128 - mae: 0.0914 - val_loss: 0.1021 - val_mse: 0.1021 - val_mae: 0.2916 - lr: 0.0010 - 228ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.09417
43/43 - 0s - loss: 0.0124 - mse: 0.0124 - mae: 0.0891 - val_loss: 0.1058 - val_mse: 0.1058 - val_mae: 0.2974 - lr: 0.0010 - 232ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.09417
43/43 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0877 - val_loss: 0.0944 - val_mse: 0.0944 - val_mae: 0.2785 - lr: 0.0010 - 241ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss improved from 0.09417 to 0.08504, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0833 - val_loss: 0.0850 - val_mse: 0.0850 - val_mae: 0.2623 - lr: 0.0010 - 245ms/epoch - 6ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.08504
43/43 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0820 - val_loss: 0.0960 - val_mse: 0.0960 - val_mae: 0.2798 - lr: 0.0010 - 223ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.08504
43/43 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0851 - val_loss: 0.1018 - val_mse: 0.1018 - val_mae: 0.2887 - lr: 0.0010 - 229ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.08504
43/43 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0831 - val_loss: 0.1003 - val_mse: 0.1003 - val_mae: 0.2854 - lr: 0.0010 - 245ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.08504
43/43 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0820 - val_loss: 0.0957 - val_mse: 0.0957 - val_mae: 0.2777 - lr: 0.0010 - 235ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.08504 to 0.07169, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0802 - val_loss: 0.0717 - val_mse: 0.0717 - val_mae: 0.2358 - lr: 0.0010 - 256ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.07169
43/43 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0801 - val_loss: 0.0895 - val_mse: 0.0895 - val_mae: 0.2674 - lr: 0.0010 - 224ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.07169
43/43 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0811 - val_loss: 0.0840 - val_mse: 0.0840 - val_mae: 0.2573 - lr: 0.0010 - 237ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.07169
43/43 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0759 - val_loss: 0.0847 - val_mse: 0.0847 - val_mae: 0.2584 - lr: 0.0010 - 229ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.07169
43/43 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0795 - val_loss: 0.0867 - val_mse: 0.0867 - val_mae: 0.2619 - lr: 0.0010 - 234ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00025: val_loss did not improve from 0.07169
43/43 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0783 - val_loss: 0.0803 - val_mse: 0.0803 - val_mae: 0.2507 - lr: 0.0010 - 238ms/epoch - 6ms/step
Epoch 26/500
Epoch 00026: val_loss improved from 0.07169 to 0.06618, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0968 - val_loss: 0.0662 - val_mse: 0.0662 - val_mae: 0.2247 - lr: 1.0000e-04 - 256ms/epoch - 6ms/step
Epoch 27/500
Epoch 00027: val_loss improved from 0.06618 to 0.06242, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0630 - val_loss: 0.0624 - val_mse: 0.0624 - val_mae: 0.2174 - lr: 1.0000e-04 - 256ms/epoch - 6ms/step
Epoch 28/500
Epoch 00028: val_loss improved from 0.06242 to 0.06165, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0563 - val_loss: 0.0617 - val_mse: 0.0617 - val_mae: 0.2158 - lr: 1.0000e-04 - 256ms/epoch - 6ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0551 - val_loss: 0.0620 - val_mse: 0.0620 - val_mae: 0.2163 - lr: 1.0000e-04 - 228ms/epoch - 5ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0545 - val_loss: 0.0622 - val_mse: 0.0622 - val_mae: 0.2166 - lr: 1.0000e-04 - 240ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0557 - val_loss: 0.0626 - val_mse: 0.0626 - val_mae: 0.2172 - lr: 1.0000e-04 - 232ms/epoch - 5ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0522 - val_loss: 0.0637 - val_mse: 0.0637 - val_mae: 0.2193 - lr: 1.0000e-04 - 228ms/epoch - 5ms/step
Epoch 33/500
Epoch 00033: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00033: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0542 - val_loss: 0.0641 - val_mse: 0.0641 - val_mae: 0.2201 - lr: 1.0000e-04 - 236ms/epoch - 5ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0531 - val_loss: 0.0642 - val_mse: 0.0642 - val_mae: 0.2202 - lr: 1.0000e-05 - 238ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0528 - val_loss: 0.0643 - val_mse: 0.0643 - val_mae: 0.2204 - lr: 1.0000e-05 - 243ms/epoch - 6ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0518 - val_loss: 0.0644 - val_mse: 0.0644 - val_mae: 0.2205 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0544 - val_loss: 0.0643 - val_mse: 0.0643 - val_mae: 0.2204 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00038: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0524 - val_loss: 0.0642 - val_mse: 0.0642 - val_mae: 0.2203 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0531 - val_loss: 0.0644 - val_mse: 0.0644 - val_mae: 0.2205 - lr: 1.0000e-05 - 246ms/epoch - 6ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0504 - val_loss: 0.0645 - val_mse: 0.0645 - val_mae: 0.2207 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0507 - val_loss: 0.0645 - val_mse: 0.0645 - val_mae: 0.2207 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0542 - val_loss: 0.0644 - val_mse: 0.0644 - val_mae: 0.2205 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0508 - val_loss: 0.0643 - val_mse: 0.0643 - val_mae: 0.2204 - lr: 1.0000e-05 - 242ms/epoch - 6ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0505 - val_loss: 0.0641 - val_mse: 0.0641 - val_mae: 0.2200 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0546 - val_loss: 0.0642 - val_mse: 0.0642 - val_mae: 0.2201 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0502 - val_loss: 0.0641 - val_mse: 0.0641 - val_mae: 0.2199 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0503 - val_loss: 0.0640 - val_mse: 0.0640 - val_mae: 0.2198 - lr: 1.0000e-05 - 244ms/epoch - 6ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0518 - val_loss: 0.0640 - val_mse: 0.0640 - val_mae: 0.2197 - lr: 1.0000e-05 - 243ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0537 - val_loss: 0.0639 - val_mse: 0.0639 - val_mae: 0.2194 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0505 - val_loss: 0.0637 - val_mse: 0.0637 - val_mae: 0.2191 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0532 - val_loss: 0.0637 - val_mse: 0.0637 - val_mae: 0.2190 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0513 - val_loss: 0.0638 - val_mse: 0.0638 - val_mae: 0.2192 - lr: 1.0000e-05 - 244ms/epoch - 6ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0500 - val_loss: 0.0641 - val_mse: 0.0641 - val_mae: 0.2199 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0505 - val_loss: 0.0644 - val_mse: 0.0644 - val_mae: 0.2204 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0525 - val_loss: 0.0645 - val_mse: 0.0645 - val_mae: 0.2205 - lr: 1.0000e-05 - 241ms/epoch - 6ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0530 - val_loss: 0.0646 - val_mse: 0.0646 - val_mae: 0.2207 - lr: 1.0000e-05 - 245ms/epoch - 6ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0527 - val_loss: 0.0645 - val_mse: 0.0645 - val_mae: 0.2206 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0498 - val_loss: 0.0646 - val_mse: 0.0646 - val_mae: 0.2208 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0529 - val_loss: 0.0650 - val_mse: 0.0650 - val_mae: 0.2215 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0539 - val_loss: 0.0652 - val_mse: 0.0652 - val_mae: 0.2218 - lr: 1.0000e-05 - 239ms/epoch - 6ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0524 - val_loss: 0.0650 - val_mse: 0.0650 - val_mae: 0.2214 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0514 - val_loss: 0.0650 - val_mse: 0.0650 - val_mae: 0.2213 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0535 - val_loss: 0.0652 - val_mse: 0.0652 - val_mae: 0.2217 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0508 - val_loss: 0.0650 - val_mse: 0.0650 - val_mae: 0.2215 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0517 - val_loss: 0.0650 - val_mse: 0.0650 - val_mae: 0.2215 - lr: 1.0000e-05 - 240ms/epoch - 6ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0496 - val_loss: 0.0650 - val_mse: 0.0650 - val_mae: 0.2214 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0528 - val_loss: 0.0650 - val_mse: 0.0650 - val_mae: 0.2214 - lr: 1.0000e-05 - 245ms/epoch - 6ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0498 - val_loss: 0.0651 - val_mse: 0.0651 - val_mae: 0.2217 - lr: 1.0000e-05 - 245ms/epoch - 6ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0514 - val_loss: 0.0649 - val_mse: 0.0649 - val_mae: 0.2211 - lr: 1.0000e-05 - 243ms/epoch - 6ms/step
Epoch 70/500
Epoch 00070: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0489 - val_loss: 0.0648 - val_mse: 0.0648 - val_mae: 0.2210 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 71/500
Epoch 00071: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0504 - val_loss: 0.0648 - val_mse: 0.0648 - val_mae: 0.2209 - lr: 1.0000e-05 - 243ms/epoch - 6ms/step
Epoch 72/500
Epoch 00072: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0539 - val_loss: 0.0647 - val_mse: 0.0647 - val_mae: 0.2207 - lr: 1.0000e-05 - 240ms/epoch - 6ms/step
Epoch 73/500
Epoch 00073: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0509 - val_loss: 0.0649 - val_mse: 0.0649 - val_mae: 0.2212 - lr: 1.0000e-05 - 238ms/epoch - 6ms/step
Epoch 74/500
Epoch 00074: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0500 - val_loss: 0.0648 - val_mse: 0.0648 - val_mae: 0.2208 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 75/500
Epoch 00075: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0506 - val_loss: 0.0645 - val_mse: 0.0645 - val_mae: 0.2202 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 76/500
Epoch 00076: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0508 - val_loss: 0.0647 - val_mse: 0.0647 - val_mae: 0.2206 - lr: 1.0000e-05 - 237ms/epoch - 6ms/step
Epoch 77/500
Epoch 00077: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0505 - val_loss: 0.0648 - val_mse: 0.0648 - val_mae: 0.2208 - lr: 1.0000e-05 - 241ms/epoch - 6ms/step
Epoch 78/500
Epoch 00078: val_loss did not improve from 0.06165
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0502 - val_loss: 0.0647 - val_mse: 0.0647 - val_mae: 0.2207 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 00078: early stopping
T3
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 51.12% Accuracy
MSE: 57.75776139981352
RMSE: 7.59985272224492
MAPE: 6.172107202063374
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
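TEMA, unlike T3, combines three stacked EMAs linearly: TEMA = 3·EMA − 3·EMA(EMA) + EMA(EMA(EMA)). A pure-Python sketch of that formula (again with a first-value EMA seed, so not bit-exact against TA-Lib):

```python
def ema(xs, period):
    # Recursive EMA seeded with the first value (a sketch; TA-Lib seeds
    # with an SMA, so values differ slightly near the start).
    k = 2 / (period + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(k * x + (1 - k) * out[-1])
    return out

def tema(xs, period=30):
    # TEMA = 3*EMA - 3*EMA(EMA) + EMA(EMA(EMA)): the linear combination
    # cancels most of the lag that stacking EMAs would otherwise add.
    e1 = ema(xs, period)
    e2 = ema(e1, period)
    e3 = ema(e2, period)
    return [3 * a - 3 * b + c for a, b, c in zip(e1, e2, e3)]
```

As with T3, a constant series passes through unchanged.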
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.40 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4352.703, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3889.412, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.21 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3689.930, Time=0.04 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3574.245, Time=0.10 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.37 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.59 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3576.245, Time=0.21 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.985 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        13:48:39   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.07907, saving model to LSTM3.h5
90/90 - 3s - loss: 0.0502 - mse: 0.0502 - mae: 0.1781 - val_loss: 0.0791 - val_mse: 0.0791 - val_mae: 0.2511 - lr: 0.0010 - 3s/epoch - 35ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.07907 to 0.05916, saving model to LSTM3.h5
90/90 - 0s - loss: 0.0170 - mse: 0.0170 - mae: 0.1027 - val_loss: 0.0592 - val_mse: 0.0592 - val_mae: 0.2091 - lr: 0.0010 - 478ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.05916
90/90 - 0s - loss: 0.0210 - mse: 0.0210 - mae: 0.1174 - val_loss: 0.0799 - val_mse: 0.0799 - val_mae: 0.2511 - lr: 0.0010 - 448ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.05916 to 0.04150, saving model to LSTM3.h5
90/90 - 0s - loss: 0.0451 - mse: 0.0451 - mae: 0.1690 - val_loss: 0.0415 - val_mse: 0.0415 - val_mae: 0.1713 - lr: 0.0010 - 473ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.04150
90/90 - 0s - loss: 0.0349 - mse: 0.0349 - mae: 0.1438 - val_loss: 0.0732 - val_mse: 0.0732 - val_mae: 0.2349 - lr: 0.0010 - 442ms/epoch - 5ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.04150 to 0.02530, saving model to LSTM3.h5
90/90 - 0s - loss: 0.0180 - mse: 0.0180 - mae: 0.0969 - val_loss: 0.0253 - val_mse: 0.0253 - val_mae: 0.1290 - lr: 0.0010 - 466ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.02530
90/90 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0747 - val_loss: 0.0759 - val_mse: 0.0759 - val_mae: 0.2374 - lr: 0.0010 - 449ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.02530 to 0.02285, saving model to LSTM3.h5
90/90 - 0s - loss: 0.0125 - mse: 0.0125 - mae: 0.0833 - val_loss: 0.0229 - val_mse: 0.0229 - val_mae: 0.1208 - lr: 0.0010 - 481ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.02285
90/90 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0738 - val_loss: 0.0928 - val_mse: 0.0928 - val_mae: 0.2619 - lr: 0.0010 - 446ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.02285 to 0.02038, saving model to LSTM3.h5
90/90 - 0s - loss: 0.0113 - mse: 0.0113 - mae: 0.0775 - val_loss: 0.0204 - val_mse: 0.0204 - val_mae: 0.1136 - lr: 0.0010 - 482ms/epoch - 5ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.02038
90/90 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0707 - val_loss: 0.0728 - val_mse: 0.0728 - val_mae: 0.2291 - lr: 0.0010 - 449ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.02038 to 0.01891, saving model to LSTM3.h5
90/90 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0779 - val_loss: 0.0189 - val_mse: 0.0189 - val_mae: 0.1109 - lr: 0.0010 - 477ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.01891
90/90 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0713 - val_loss: 0.0716 - val_mse: 0.0716 - val_mae: 0.2254 - lr: 0.0010 - 456ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.01891 to 0.01859, saving model to LSTM3.h5
90/90 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0735 - val_loss: 0.0186 - val_mse: 0.0186 - val_mae: 0.1106 - lr: 0.0010 - 485ms/epoch - 5ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.01859
90/90 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0681 - val_loss: 0.0556 - val_mse: 0.0556 - val_mae: 0.1956 - lr: 0.0010 - 439ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: val_loss improved from 0.01859 to 0.01584, saving model to LSTM3.h5
90/90 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0700 - val_loss: 0.0158 - val_mse: 0.0158 - val_mae: 0.1005 - lr: 0.0010 - 471ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0659 - val_loss: 0.0572 - val_mse: 0.0572 - val_mae: 0.1978 - lr: 0.0010 - 446ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0708 - val_loss: 0.0201 - val_mse: 0.0201 - val_mae: 0.1110 - lr: 0.0010 - 452ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0625 - val_loss: 0.0532 - val_mse: 0.0532 - val_mae: 0.1895 - lr: 0.0010 - 449ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0700 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1119 - lr: 0.0010 - 435ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00021: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0683 - val_loss: 0.0607 - val_mse: 0.0607 - val_mae: 0.2059 - lr: 0.0010 - 462ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0192 - mse: 0.0192 - mae: 0.1141 - val_loss: 0.0325 - val_mse: 0.0325 - val_mae: 0.1407 - lr: 1.0000e-04 - 444ms/epoch - 5ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0650 - val_loss: 0.0307 - val_mse: 0.0307 - val_mae: 0.1348 - lr: 1.0000e-04 - 459ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0606 - val_loss: 0.0308 - val_mse: 0.0308 - val_mae: 0.1336 - lr: 1.0000e-04 - 446ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0605 - val_loss: 0.0314 - val_mse: 0.0314 - val_mae: 0.1341 - lr: 1.0000e-04 - 472ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00026: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0596 - val_loss: 0.0320 - val_mse: 0.0320 - val_mae: 0.1346 - lr: 1.0000e-04 - 449ms/epoch - 5ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0577 - val_loss: 0.0314 - val_mse: 0.0314 - val_mae: 0.1333 - lr: 1.0000e-05 - 467ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0533 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1326 - lr: 1.0000e-05 - 439ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0556 - val_loss: 0.0309 - val_mse: 0.0309 - val_mae: 0.1319 - lr: 1.0000e-05 - 450ms/epoch - 5ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0543 - val_loss: 0.0307 - val_mse: 0.0307 - val_mae: 0.1314 - lr: 1.0000e-05 - 463ms/epoch - 5ms/step
Epoch 31/500
Epoch 00031: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00031: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0531 - val_loss: 0.0306 - val_mse: 0.0306 - val_mae: 0.1311 - lr: 1.0000e-05 - 447ms/epoch - 5ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0520 - val_loss: 0.0306 - val_mse: 0.0306 - val_mae: 0.1311 - lr: 1.0000e-05 - 455ms/epoch - 5ms/step
[Epochs 33-65 elided: val_loss never improved from 0.01584, drifting from 0.0307 to 0.0363 at lr 1e-05, while train loss held near 0.004.]
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.01584
90/90 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0527 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1431 - lr: 1.0000e-05 - 445ms/epoch - 5ms/step
Epoch 00066: early stopping
SMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 123.96893050522607
RMSE: 11.134133576764116
MAPE: 9.602398807260117
EMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 63.919262026708296
RMSE: 7.994952284204596
MAPE: 6.479287961204322
WMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 24.651058301828286
RMSE: 4.9649832126431495
MAPE: 3.9308905500983484
DEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 156.8635759091866
RMSE: 12.524518989134338
MAPE: 11.387412907589542
KAMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 59.19746610115158
RMSE: 7.69398895899595
MAPE: 6.776737847872761
MIDPOINT
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 46.490023595118274
RMSE: 6.818359303756166
MAPE: 5.538801606657957
T3
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 51.12% Accuracy
MSE: 57.75776139981352
RMSE: 7.59985272224492
MAPE: 6.172107202063374
TEMA
Prediction vs Close: 50.75% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 61.81638170069383
RMSE: 7.862339454684835
MAPE: 7.157520441443416
Runtime: mins: 12.264119731699997
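The per-MA summaries above combine directional accuracy with MSE, RMSE, and a percentage-error figure. As a minimal sketch of how such error metrics are computed (using sklearn, on hypothetical toy values rather than the experiment's predictions):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

# Toy actual closes and predictions (illustrative values only)
actual = np.array([100.0, 102.0, 101.0, 105.0])
pred = np.array([101.0, 101.5, 102.0, 104.0])

mse = mean_squared_error(actual, pred)
rmse = mse ** 0.5
# sklearn returns MAPE as a fraction; multiply by 100 for a percentage
mape = 100 * mean_absolute_percentage_error(actual, pred)
print(round(mse, 4), round(rmse, 4), round(mape, 4))
```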
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment3.png to Experiment3 (1).png
img = cv2.imread('Experiment3.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fb3f5496390>
with open('simulation3_data.json') as json_file:
simulation3 = json.load(json_file)
imgfile = 'Experiment3'
for i in range(len(list(simulation3.keys()))):
SIM = list(simulation3.keys())[i]
plot_train(simulation3,SIM)
plot_test(simulation3,SIM)
----- Train RMSE for SMA ----- 8.930823939228564 ----- Train_MSE_LSTM for SMA ----- 79.759616233498 ----- Train MAE LSTM for SMA ----- 7.745279391393028
----- Test RMSE for SMA----- 11.134133576764116 ----- Test_MSE_LSTM for SMA----- 123.96893050522607 ----- Test_MAE_LSTM for SMA----- 9.602398807260117
----- Train RMSE for EMA ----- 10.565146653068828 ----- Train_MSE_LSTM for EMA ----- 111.62232380085146 ----- Train MAE LSTM for EMA ----- 9.411117943188193
----- Test RMSE for EMA----- 7.994952284204596 ----- Test_MSE_LSTM for EMA----- 63.919262026708296 ----- Test_MAE_LSTM for EMA----- 6.479287961204322
----- Train RMSE for WMA ----- 10.832264554488258 ----- Train_MSE_LSTM for WMA ----- 117.3379553784227 ----- Train MAE LSTM for WMA ----- 9.744005795817195
----- Test RMSE for WMA----- 4.9649832126431495 ----- Test_MSE_LSTM for WMA----- 24.651058301828286 ----- Test_MAE_LSTM for WMA----- 3.9308905500983484
----- Train RMSE for DEMA ----- 12.56480336163703 ----- Train_MSE_LSTM for DEMA ----- 157.8742835166052 ----- Train MAE LSTM for DEMA ----- 11.433644929764755
----- Test RMSE for DEMA----- 12.524518989134338 ----- Test_MSE_LSTM for DEMA----- 156.8635759091866 ----- Test_MAE_LSTM for DEMA----- 11.387412907589542
----- Train RMSE for KAMA ----- 10.593820426171298 ----- Train_MSE_LSTM for KAMA ----- 112.22903122196423 ----- Train MAE LSTM for KAMA ----- 9.500249049755386
----- Test RMSE for KAMA----- 7.69398895899595 ----- Test_MSE_LSTM for KAMA----- 59.19746610115158 ----- Test_MAE_LSTM for KAMA----- 6.776737847872761
----- Train RMSE for MIDPOINT ----- 9.5736663214708 ----- Train_MSE_LSTM for MIDPOINT ----- 91.65508683486424 ----- Train MAE LSTM for MIDPOINT ----- 8.498581498536272
----- Test RMSE for MIDPOINT----- 6.818359303756166 ----- Test_MSE_LSTM for MIDPOINT----- 46.490023595118274 ----- Test_MAE_LSTM for MIDPOINT----- 5.538801606657957
----- Train RMSE for T3 ----- 12.412205084380062 ----- Train_MSE_LSTM for T3 ----- 154.06283505671027 ----- Train MAE LSTM for T3 ----- 11.289339550955239
----- Test RMSE for T3----- 7.59985272224492 ----- Test_MSE_LSTM for T3----- 57.75776139981352 ----- Test_MAE_LSTM for T3----- 6.172107202063374
----- Train RMSE for TEMA ----- 7.318337811184663 ----- Train_MSE_LSTM for TEMA ----- 53.55806831861513 ----- Train MAE LSTM for TEMA ----- 4.992749833584864
----- Test RMSE for TEMA----- 7.862339454684835 ----- Test_MSE_LSTM for TEMA----- 61.81638170069383 ----- Test_MAE_LSTM for TEMA----- 7.157520441443416
From the experiments above it is evident that MAs with longer periods produce loss plots that show underrepresented data and underfitting, so only the MAs with shorter periods, such as T3 or TRIMA, are kept. Going forward, EMA, WMA & DEMA will be ignored.
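The head of this section ties MA-period choice to whether the high-volatility residual is mesokurtic. A minimal sketch of checking that, assuming a `residual_kurtosis` helper (hypothetical name) and a plain SMA as a stand-in for the TA-Lib MA functions used in the experiments:

```python
import numpy as np
from scipy.stats import kurtosis

def residual_kurtosis(close, periods):
    """Excess kurtosis of (close - MA) for each candidate MA period."""
    results = {}
    for p in periods:
        # simple moving average as a stand-in for any TA-Lib MA function
        ma = np.convolve(close, np.ones(p) / p, mode='valid')
        residual = close[p - 1:] - ma
        # Fisher definition: excess kurtosis near 0 means roughly mesokurtic
        results[p] = kurtosis(residual, fisher=True)
    return results

# Synthetic random-walk "price" series for illustration
close = np.cumsum(np.random.default_rng(0).normal(0, 1, 500)) + 100
print(residual_kurtosis(close, [5, 17, 51]))
```

Periods whose residual kurtosis sits far from zero would, under this reasoning, shift too much or too little volatility into the LSTM's share of the decomposition.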
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
# prepare train and test data
X_value = pd.DataFrame(data.iloc[:, :])
y_value = pd.DataFrame(data.iloc[:, 3])
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
# Get data and check shape
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset) # X has shape 224 x 3 x 21 (each 3 x 21 slice is 3 days' worth of data); yc holds the corresponding closing price values
# pdb.set_trace()
X_train, X_test, = split_train_test(X)
y_train, y_test, = split_train_test(y)
# yc_train, yc_test, = split_train_test(original_data)
index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
det = 20
input_dim = X_train.shape[1]#3
feature_size = X_train.shape[2]#24
output_dim = y_train.shape[1]#1
# # Option 1
# # Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
# model.add(Dense(units=64,activation='relu'))
# model.add(Dropout(0.5))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')
# ## Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# # option 2
# model = Sequential()
# model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
# model.add(Dense(64))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# # Option 3
# # define custom activation
# #
# class Double_Tanh(Activation):
# def __init__(self, activation, **kwargs):
# super(Double_Tanh, self).__init__(activation, **kwargs)
# self.__name__ = 'double_tanh'
# def double_tanh(x):
# return (K.tanh(x) * 2)
# get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
# # Model Generation
# model = Sequential()
# #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
# model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
# model.add(Dense(1))
# model.add(Activation(double_tanh))
# model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 4
# Set up & fit LSTM RNN
model = Sequential()
model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
model.add(LSTM(units=int(lstm_len/2)))
model.add(Dense(1, activation='sigmoid')) # note: sigmoid bounds outputs to (0, 1) even though targets are scaled to (-1, 1)
model.compile(loss='mean_squared_error', optimizer='adam')
# Common code
callbacks = [
EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
ModelCheckpoint('LSTM4.h5', verbose=1, save_best_only=True, save_weights_only=True)]
fname1 = img_file+'.png'
tensorflow.keras.utils.plot_model(
model, to_file=fname1, show_shapes=True, show_dtype=False,
show_layer_names=True, expand_nested=False, dpi=96,
layer_range=None, show_layer_activations=False
)
history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# plot loss
fname2 = img_file+'-'+ma
plt.title(img_file+'-'+ma+' Loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='validation')
pyplot.legend()
pyplot.savefig(fname2+'.png',dpi='figure')
pyplot.show()
# Generate predictions
predictiontr = model.predict(X_train, verbose=0)
predictiontr = (y_scaler.inverse_transform(predictiontr)-det).tolist()
outputtr = []
for i in range(len(predictiontr)):
outputtr.extend(predictiontr[i])
predictiontr = outputtr
# Generate error data
## replace with yc , xtest generated by new multistep method
mse_tr = mean_squared_error(y_train, predictiontr)
rmse_tr = mse_tr ** 0.5
# mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
# Original_tr = pd.Series(yc_train)
Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
predictionte = model.predict(X_test, verbose=0)
predictionte =( y_scaler.inverse_transform(predictionte)-det).tolist()
outputte = []
for i in range(len(predictionte)):
outputte.extend(predictionte[i])
predictionte = outputte
# Generate error data
mse_te = mean_squared_error(y_test, predictionte)
rmse_te = mse_te ** 0.5
# mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
# Original_te = pd.Series(yc_test)
Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
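`get_lstm` fits a `MinMaxScaler` on the target column, trains on the scaled values, and maps predictions back to price space with `inverse_transform`. A toy illustration of that round trip (illustrative values, not the experiment's data):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Target column as a 2-D array, as MinMaxScaler expects
y = np.array([[100.0], [105.0], [110.0], [120.0]])

y_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaled = y_scaler.fit_transform(y)           # values now span [-1, 1]
y_back = y_scaler.inverse_transform(y_scaled)  # recovers the original prices
print(np.allclose(y, y_back))                  # True
```

This is why the function keeps separate `X_scaler` and `y_scaler` objects: only `y_scaler` knows how to map model outputs back to prices.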
if __name__ == '__main__':
start_time = timeit.default_timer()
simulation4 = {}
imgfile = 'Experiment4'
for ma in optimized_period:
print(ma)
print(functions[ma])
print ( int( optimized_period[ma]))
# if ma == 'SMA':
low_vol = df.apply(lambda c: functions[ma](c, timeperiod = int( optimized_period[ma])))
low_vol = low_vol.fillna(0)
low_vol_data = df['close']
high_vol = pd.DataFrame()
df2 = df.copy()
for i in df2.columns:
if i in low_vol.columns:
high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
high_vol_data = df['close']
## *****************************************************
# Generate ARIMA and LSTM predictions
print('\nWorking on ' + ma + ' predictions')
try:
print('parameters used : ', train_len, test_len)
low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
except:
print('ARIMA error, skipping to next MA type')
continue
Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps
mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
rmse_ftr = mse_ftr ** 0.5
mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
rmse = mse ** 0.5
mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
# Generate prediction accuracy
actual = df['close'].tail(test_len).values
result_1 = []
result_2 = []
for i in range(1, len(final_prediction)):
# Compare prediction to previous close price
if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
result_1.append(1)
elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
result_1.append(1)
else:
result_1.append(0)
# Compare prediction to previous prediction
if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
result_2.append(1)
elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
result_2.append(1)
else:
result_2.append(0)
accuracy_1 = np.mean(result_1)
accuracy_2 = np.mean(result_2)
simulation4[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
'rmse': low_vol_rmse, 'mae' : low_vol_mae},
'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
'rmse': high_vol_rmse, 'mae' : high_vol_mae},
'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
'rmse': rmse_ftr, 'mae' : mae_ftr},
'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
'rmse': rmse, 'mae': mae },
'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
# save simulation data here as checkpoint
with open('simulation4_data.json', 'w') as fp:
json.dump(simulation4, fp)
for ma in simulation4.keys():
print('\n' + ma)
print('Prediction vs Close:\t\t' + str(round(100*simulation4[ma]['accuracy']['prediction vs close'], 2))
+ '% Accuracy')
print('Prediction vs Prediction:\t' + str(round(100*simulation4[ma]['accuracy']['prediction vs prediction'], 2))
+ '% Accuracy')
print('MSE:\t', simulation4[ma]['final']['mse'],
'\nRMSE:\t', simulation4[ma]['final']['rmse'],
'\nMAPE:\t', simulation4[ma]['final']['mae'])# note: the value printed under the MAPE label is the stored MAE
# '\nMAPE:\t', simulation[ma]['final']['mape'])
# else:
# break
elapsed = timeit.default_timer() - start_time
print('Runtime: mins:',elapsed/60)
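The accuracy loop above scores a 1 whenever the prediction and the actual close move in the same direction (versus the previous close for `accuracy_1`, versus the previous prediction for `accuracy_2`). The same logic can be expressed compactly with NumPy sign comparisons; `directional_accuracy` is a hypothetical helper, not part of the notebook:

```python
import numpy as np

def directional_accuracy(actual, prediction):
    """Vectorized equivalent of the prediction-vs-close and
    prediction-vs-prediction directional-accuracy loops."""
    actual = np.asarray(actual, dtype=float)
    prediction = np.asarray(prediction, dtype=float)
    actual_move = np.sign(np.diff(actual))
    # accuracy_1: prediction direction relative to the previous close
    pred_vs_close = np.sign(prediction[1:] - actual[:-1])
    acc1 = np.mean((pred_vs_close == actual_move) & (actual_move != 0))
    # accuracy_2: consecutive-prediction direction vs consecutive-close direction
    pred_move = np.sign(np.diff(prediction))
    acc2 = np.mean((pred_move == actual_move) & (actual_move != 0))
    return acc1, acc2

print(directional_accuracy([1, 2, 1, 3], [1.5, 2.5, 0.5, 2.0]))
```

As in the original loop, ties (no movement) count as misses rather than hits.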
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.39 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4157.020, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3687.148, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.15 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3458.651, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3322.133, Time=0.06 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.56 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.58 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3324.133, Time=0.17 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.034 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1657.067
Date: Sun, 12 Dec 2021 AIC 3322.133
Time: 13:58:09 BIC 3340.897
Sample: 0 HQIC 3329.339
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1966 0.003 -387.226 0.000 -1.203 -1.191
ar.L2 -0.8952 0.006 -138.692 0.000 -0.908 -0.883
ar.L3 -0.3968 0.006 -68.284 0.000 -0.408 -0.385
sigma2 3.5858 0.017 214.535 0.000 3.553 3.619
===================================================================================
Ljung-Box (L1) (Q): 14.47 Jarque-Bera (JB): 2428881.42
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 271.99
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04752, saving model to LSTM4.h5
48/48 - 5s - loss: 1.3281 - val_loss: 0.0475 - lr: 0.0010 - 5s/epoch - 98ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.04752
48/48 - 0s - loss: 1.1972 - val_loss: 0.0492 - lr: 0.0010 - 309ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.04752
48/48 - 0s - loss: 1.0857 - val_loss: 0.0527 - lr: 0.0010 - 310ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.04752
48/48 - 0s - loss: 1.0040 - val_loss: 0.0580 - lr: 0.0010 - 307ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.04752
48/48 - 0s - loss: 0.9411 - val_loss: 0.0646 - lr: 0.0010 - 323ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.04752
48/48 - 0s - loss: 0.8869 - val_loss: 0.0711 - lr: 0.0010 - 311ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.04752
48/48 - 0s - loss: 0.8549 - val_loss: 0.0718 - lr: 1.0000e-04 - 308ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.04752
48/48 - 0s - loss: 0.8502 - val_loss: 0.0724 - lr: 1.0000e-04 - 311ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.04752
48/48 - 0s - loss: 0.8456 - val_loss: 0.0730 - lr: 1.0000e-04 - 302ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.04752
48/48 - 0s - loss: 0.8410 - val_loss: 0.0737 - lr: 1.0000e-04 - 313ms/epoch - 7ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.04752
48/48 - 0s - loss: 0.8366 - val_loss: 0.0744 - lr: 1.0000e-04 - 317ms/epoch - 7ms/step
[Epochs 12-50 elided: val_loss never improved from 0.04752, drifting from 0.0744 to 0.0780 at lr 1e-05, while train loss declined slowly from 0.834 to 0.816.]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.04752
48/48 - 0s - loss: 0.8160 - val_loss: 0.0781 - lr: 1.0000e-05 - 304ms/epoch - 6ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 22.0961825771905
RMSE: 4.700657674963207
MAPE: 3.7488296078488137
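The metric block above (repeated after each indicator run) can be reproduced with a short helper. This is an illustrative sketch, not the notebook's actual evaluation code; in particular, the definitions of the two directional-accuracy figures are assumptions (predicted level vs. the previous close, and consecutive-prediction direction vs. the actual move).

```python
import numpy as np

def evaluate(pred, close):
    """Illustrative metric set; the two accuracy definitions are
    assumptions, not necessarily the notebook's exact formulas."""
    pred = np.asarray(pred, dtype=float)
    close = np.asarray(close, dtype=float)
    actual_dir = np.sign(np.diff(close))                      # actual up/down move
    # "Prediction vs Close": predicted level compared to yesterday's close
    pvc = np.mean(np.sign(pred[1:] - close[:-1]) == actual_dir) * 100
    # "Prediction vs Prediction": direction of consecutive predictions
    pvp = np.mean(np.sign(np.diff(pred)) == actual_dir) * 100
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    return pvc, pvp, mse, rmse, mape
```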
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
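The EMA documented above can be approximated without TA-Lib; this is a minimal sketch of the standard recursion with smoothing factor alpha = 2/(timeperiod+1). The first-value seeding is an assumption for simplicity (TA-Lib seeds with an SMA of the first `timeperiod` values, so early outputs differ slightly).

```python
import numpy as np

def ema(price, timeperiod=30):
    """Recursive exponential moving average: y[t] = a*x[t] + (1-a)*y[t-1]."""
    alpha = 2.0 / (timeperiod + 1)
    out = np.empty(len(price))
    out[0] = price[0]          # simple seed; TA-Lib seeds with an SMA instead
    for t in range(1, len(price)):
        out[t] = alpha * price[t] + (1 - alpha) * out[t - 1]
    return out
```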
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.36 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4231.556, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3761.238, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.18 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3532.227, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3394.496, Time=0.11 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.02 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.52 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3396.496, Time=0.24 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.543 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1693.248
Date: Sun, 12 Dec 2021 AIC 3394.496
Time: 13:59:42 BIC 3413.260
Sample: 0 HQIC 3401.702
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.569 0.000 -1.204 -1.192
ar.L2 -0.8976 0.006 -139.811 0.000 -0.910 -0.885
ar.L3 -0.3984 0.006 -68.662 0.000 -0.410 -0.387
sigma2 3.9230 0.018 215.372 0.000 3.887 3.959
===================================================================================
Ljung-Box (L1) (Q): 14.54 Jarque-Bera (JB): 2462173.05
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 273.82
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
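The AIC values driving the stepwise search can be checked by hand from the summary above: with log likelihood -1693.248 and k = 4 estimated parameters (ar.L1 through ar.L3 plus sigma2), AIC = 2k - 2 ln L = 3394.496. The reported BIC is consistent with an effective sample size of 805 (808 observations minus the d = 3 lost to differencing). A small sketch:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n_eff):
    """Bayesian information criterion: k ln(n) - 2 ln L."""
    return k * math.log(n_eff) - 2 * loglik

# ARIMA(3,3,0) above: k = 4 (three AR terms + sigma2), 808 - 3 = 805 effective obs
print(aic(-1693.248, 4))                  # 3394.496, as in the trace and summary
print(round(bic(-1693.248, 4, 805), 3))   # ~3413.26, matching the summary's BIC
```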
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04897, saving model to LSTM4.h5
16/16 - 5s - loss: 1.3851 - val_loss: 0.0490 - lr: 0.0010 - 5s/epoch - 286ms/step
...
[epochs 2-51: val_loss did not improve from 0.04897; ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 6 and to 1.0000e-05 at epoch 11; loss drifted from 1.3596 to 1.2128, val_loss from 0.0500 to 0.0549]
Epoch 00051: early stopping
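The learning-rate trace above (1.0000e-03 until epoch 6, 1.0000e-04 until epoch 11, then 1.0000e-05 and clamped at the floor) is consistent with Keras' ReduceLROnPlateau using factor=0.1, patience=5, and min_lr=1e-05; likewise, stopping at epoch 51 with the best epoch at 1 suggests an EarlyStopping patience of 50. These parameter values are inferred from the log, not taken from the notebook's code. A pure-Python sketch of the scheduler's logic:

```python
class ReduceLROnPlateauSketch:
    """Minimal sketch of Keras' ReduceLROnPlateau (monitoring val_loss)."""
    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
        self.lr, self.factor, self.patience, self.min_lr = lr, factor, patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def on_epoch_end(self, val_loss):
        if val_loss < self.best:
            self.best, self.wait = val_loss, 0     # improvement resets the counter
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)  # clamp at floor
                self.wait = 0
        return self.lr

sched = ReduceLROnPlateauSketch()
lrs = [sched.on_epoch_end(0.049)]                                  # epoch 1 improves
lrs += [sched.on_epoch_end(0.05 + 0.001 * i) for i in range(15)]   # then plateaus
# lr drops to 1e-04 at epoch 6 and to 1e-05 at epoch 11, matching the trace
```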
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 36.69312385194829
RMSE: 6.057484944426053
MAPE: 4.755707959713801
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
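TA-Lib's WMA documented above is the linearly weighted moving average: the most recent bar is weighted `timeperiod`, the oldest 1. A sketch of the formula (again an illustration, not TA-Lib itself):

```python
import numpy as np

def wma(price, timeperiod=30):
    """Linearly weighted moving average over a rolling window."""
    price = np.asarray(price, dtype=float)
    w = np.arange(1, timeperiod + 1, dtype=float)
    w /= w.sum()                                  # weights 1..n, normalized
    out = np.full(len(price), np.nan)             # first timeperiod-1 undefined
    for t in range(timeperiod - 1, len(price)):
        out[t] = price[t - timeperiod + 1 : t + 1] @ w
    return out
```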
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4264.089, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3793.930, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.18 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3564.923, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3427.258, Time=0.10 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.49 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.35 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3429.258, Time=0.14 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.749 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1709.629
Date: Sun, 12 Dec 2021 AIC 3427.258
Time: 14:01:02 BIC 3446.021
Sample: 0 HQIC 3434.464
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1981 0.003 -389.386 0.000 -1.204 -1.192
ar.L2 -0.8974 0.006 -139.699 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.737 0.000 -0.410 -0.387
sigma2 4.0860 0.019 215.311 0.000 4.049 4.123
===================================================================================
Ljung-Box (L1) (Q): 14.57 Jarque-Bera (JB): 2460901.70
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 273.75
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.03613, saving model to LSTM4.h5
17/17 - 5s - loss: 1.4348 - val_loss: 0.0361 - lr: 0.0010 - 5s/epoch - 304ms/step
...
[epochs 2-51: val_loss did not improve from 0.03613; ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 6 and to 1.0000e-05 at epoch 11; loss drifted from 1.4035 to 1.1713, val_loss from 0.0367 to 0.0419]
Epoch 00051: early stopping
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 61.47074835668693
RMSE: 7.8403283321992925
MAPE: 6.468176158698829
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
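TA-Lib's DEMA documented above combines two EMAs to reduce lag: DEMA = 2*EMA(price) - EMA(EMA(price)). A self-contained sketch (the simple first-value seeding is an assumption; TA-Lib seeds its EMAs with an SMA, so early values differ):

```python
import numpy as np

def dema(price, timeperiod=30):
    """Double EMA: 2*EMA(price) - EMA(EMA(price)); cuts the lag of a plain EMA."""
    def ema(x):
        alpha = 2.0 / (timeperiod + 1)
        out = np.empty(len(x))
        out[0] = x[0]                              # simple seed (assumption)
        for t in range(1, len(x)):
            out[t] = alpha * x[t] + (1 - alpha) * out[t - 1]
        return out
    e1 = ema(np.asarray(price, dtype=float))
    return 2 * e1 - ema(e1)
```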
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4436.126, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3965.317, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.28 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3736.589, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3598.951, Time=0.10 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.17 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.73 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3600.951, Time=0.15 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.921 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1795.475
Date: Sun, 12 Dec 2021 AIC 3598.951
Time: 14:02:19 BIC 3617.714
Sample: 0 HQIC 3606.157
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1983 0.003 -389.581 0.000 -1.204 -1.192
ar.L2 -0.8973 0.006 -139.732 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.649 0.000 -0.410 -0.387
sigma2 5.0573 0.023 215.292 0.000 5.011 5.103
===================================================================================
Ljung-Box (L1) (Q): 14.41 Jarque-Bera (JB): 2460553.80
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.89
Prob(H) (two-sided): 0.00 Kurtosis: 273.74
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.07261, saving model to LSTM4.h5
10/10 - 5s - loss: 1.5344 - val_loss: 0.0726 - lr: 0.0010 - 5s/epoch - 458ms/step
...
[epochs 2-10: val_loss improved each epoch, from 0.07257 to 0.06201 (best, saved to LSTM4.h5 at epoch 10); loss fell from 1.5110 to 1.0279]
...
[epochs 11-60: val_loss did not improve from 0.06201; ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 15 and to 1.0000e-05 at epoch 20; loss drifted from 0.9851 to 0.8591, val_loss from 0.0620 to 0.0648]
Epoch 00060: early stopping
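The learning-rate drops visible in these logs (1e-3 to 1e-4 to 1e-5, with training halting once val_loss stalls) follow the usual ReduceLROnPlateau + EarlyStopping pattern. Below is a minimal pure-Python sketch of that schedule logic; the parameter values (factor=0.1, lr patience 5, stopping patience 50, min_lr=1e-5) are inferred from the printed epochs, not confirmed settings from the notebook's code.

```python
# Sketch of the LR schedule implied by the log: cut the learning rate by
# a factor after `lr_patience` stale epochs, stop after `stop_patience`.
# All parameter values here are assumptions inferred from the output.
def simulate_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                      stop_patience=50, min_lr=1e-5):
    best, since_best = float("inf"), 0
    events = []
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, since_best = vl, 0
        else:
            since_best += 1
        # ReduceLROnPlateau-style cut after `lr_patience` stale epochs
        if since_best and since_best % lr_patience == 0 and lr > min_lr:
            lr = max(lr * factor, min_lr)
            events.append(("reduce_lr", epoch, lr))
        # EarlyStopping-style halt after `stop_patience` stale epochs
        if since_best >= stop_patience:
            events.append(("early_stop", epoch, lr))
            break
    return events

# A val_loss that never improves after epoch 1 reproduces the pattern in
# the log: reductions starting at epoch 6 and a stop at best_epoch + 50.
losses = [0.0518] + [0.0538 + 0.0001 * i for i in range(60)]
print(simulate_schedule(losses))
```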
SMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 22.0961825771905
RMSE: 4.700657674963207
MAPE: 3.7488296078488137
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 36.69312385194829
RMSE: 6.057484944426053
MAPE: 4.755707959713801
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 61.47074835668693
RMSE: 7.8403283321992925
MAPE: 6.468176158698829
DEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 114.21230424130383
RMSE: 10.687015684525958
MAPE: 9.305044543155903
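Each per-indicator summary reports MSE, RMSE, MAPE, and two directional-accuracy scores. The error formulas below are standard; the directional-accuracy definition is an assumption (sign of the predicted move vs sign of the actual move), since the notebook does not show how "Prediction vs Close" and "Prediction vs Prediction" are computed.

```python
import math

# Hedged sketch of the evaluation metrics printed above.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(mse(y_true, y_pred))

def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def directional_accuracy(y_true, y_pred):
    # Assumed definition: share of days where the predicted move from the
    # previous close has the same sign as the actual move.
    hits = sum(
        ((y_pred[i] - y_true[i - 1]) > 0) == ((y_true[i] - y_true[i - 1]) > 0)
        for i in range(1, len(y_true))
    )
    return 100 * hits / (len(y_true) - 1)

close = [100.0, 101.0, 99.5, 100.5]
pred = [100.2, 100.8, 99.9, 100.4]
print(round(rmse(close, pred), 3), round(mape(close, pred), 3))
```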
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
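The KAMA call above comes from TA-Lib. For readers without TA-Lib, here is a pure-Python sketch of Kaufman's Adaptive Moving Average following Kaufman's original formulation (efficiency-ratio period 10, fast/slow constants 2 and 30); these defaults and the warm-up handling are my assumptions, not TA-Lib's internals.

```python
def kama(prices, er_period=10, fast=2, slow=30):
    # Kaufman Adaptive Moving Average: blends between a fast and a slow
    # EMA smoothing constant according to the efficiency ratio (net move
    # divided by total path length over `er_period`).
    fast_sc, slow_sc = 2 / (fast + 1), 2 / (slow + 1)
    out = [None] * len(prices)
    out[er_period] = prices[er_period]  # seed with the first usable price
    for i in range(er_period + 1, len(prices)):
        change = abs(prices[i] - prices[i - er_period])
        volatility = sum(abs(prices[j] - prices[j - 1])
                         for j in range(i - er_period + 1, i + 1))
        er = change / volatility if volatility else 0.0
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out[i] = out[i - 1] + sc * (prices[i] - out[i - 1])
    return out
```

On a strongly trending series the efficiency ratio is near 1, so KAMA tracks price closely; on a flat or choppy series it barely moves, which is the adaptive behavior that motivates using it here.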
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.32 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4190.464, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3724.371, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.20 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3494.154, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3357.435, Time=0.11 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.43 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.58 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3359.435, Time=0.26 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.022 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1674.717
Date: Sun, 12 Dec 2021 AIC 3357.435
Time: 14:03:40 BIC 3376.198
Sample: 0 HQIC 3364.641
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1955 0.003 -381.246 0.000 -1.202 -1.189
ar.L2 -0.8964 0.007 -135.835 0.000 -0.909 -0.883
ar.L3 -0.3971 0.006 -67.229 0.000 -0.409 -0.385
sigma2 3.7466 0.018 211.623 0.000 3.712 3.781
===================================================================================
Ljung-Box (L1) (Q): 14.20 Jarque-Bera (JB): 2338363.32
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 3.76
Prob(H) (two-sided): 0.00 Kurtosis: 266.93
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
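The stepwise search minimizes AIC, and the criteria in the SARIMAX summary follow directly from the reported log-likelihood with k = 4 estimated parameters (ar.L1, ar.L2, ar.L3, sigma2). The BIC/HQIC values appear to use the effective sample after third-order differencing, n - d = 808 - 3 = 805; that is an inference from the printed numbers, not something the output states.

```python
import math

# Reproducing the information criteria from the SARIMAX summary above.
log_lik, k = -1674.717, 4
n_eff = 808 - 3  # effective sample after d=3 differencing (inferred)
aic = 2 * k - 2 * log_lik
bic = k * math.log(n_eff) - 2 * log_lik
hqic = 2 * k * math.log(math.log(n_eff)) - 2 * log_lik
print(round(aic, 3), round(bic, 3), round(hqic, 3))
```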
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05179, saving model to LSTM4.h5
45/45 - 5s - loss: 1.4217 - val_loss: 0.0518 - lr: 0.0010 - 5s/epoch - 109ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.3716 - val_loss: 0.0538 - lr: 0.0010 - 296ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.3061 - val_loss: 0.0555 - lr: 0.0010 - 289ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.2392 - val_loss: 0.0574 - lr: 0.0010 - 287ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.1760 - val_loss: 0.0601 - lr: 0.0010 - 300ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.1169 - val_loss: 0.0640 - lr: 0.0010 - 298ms/epoch - 7ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0830 - val_loss: 0.0644 - lr: 1.0000e-04 - 292ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0781 - val_loss: 0.0649 - lr: 1.0000e-04 - 290ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0734 - val_loss: 0.0653 - lr: 1.0000e-04 - 302ms/epoch - 7ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0688 - val_loss: 0.0658 - lr: 1.0000e-04 - 293ms/epoch - 7ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0643 - val_loss: 0.0662 - lr: 1.0000e-04 - 299ms/epoch - 7ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0615 - val_loss: 0.0663 - lr: 1.0000e-05 - 297ms/epoch - 7ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0611 - val_loss: 0.0663 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0606 - val_loss: 0.0664 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0602 - val_loss: 0.0664 - lr: 1.0000e-05 - 299ms/epoch - 7ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0598 - val_loss: 0.0665 - lr: 1.0000e-05 - 306ms/epoch - 7ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0593 - val_loss: 0.0665 - lr: 1.0000e-05 - 287ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0589 - val_loss: 0.0666 - lr: 1.0000e-05 - 298ms/epoch - 7ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0585 - val_loss: 0.0666 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0580 - val_loss: 0.0667 - lr: 1.0000e-05 - 301ms/epoch - 7ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0576 - val_loss: 0.0667 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0572 - val_loss: 0.0668 - lr: 1.0000e-05 - 301ms/epoch - 7ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0567 - val_loss: 0.0668 - lr: 1.0000e-05 - 292ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0563 - val_loss: 0.0669 - lr: 1.0000e-05 - 290ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0558 - val_loss: 0.0669 - lr: 1.0000e-05 - 293ms/epoch - 7ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0554 - val_loss: 0.0670 - lr: 1.0000e-05 - 294ms/epoch - 7ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0550 - val_loss: 0.0670 - lr: 1.0000e-05 - 298ms/epoch - 7ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0545 - val_loss: 0.0671 - lr: 1.0000e-05 - 287ms/epoch - 6ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0541 - val_loss: 0.0671 - lr: 1.0000e-05 - 307ms/epoch - 7ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0537 - val_loss: 0.0672 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0532 - val_loss: 0.0673 - lr: 1.0000e-05 - 293ms/epoch - 7ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0528 - val_loss: 0.0673 - lr: 1.0000e-05 - 287ms/epoch - 6ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0524 - val_loss: 0.0674 - lr: 1.0000e-05 - 315ms/epoch - 7ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0519 - val_loss: 0.0674 - lr: 1.0000e-05 - 290ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0515 - val_loss: 0.0675 - lr: 1.0000e-05 - 301ms/epoch - 7ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0511 - val_loss: 0.0675 - lr: 1.0000e-05 - 285ms/epoch - 6ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0506 - val_loss: 0.0676 - lr: 1.0000e-05 - 298ms/epoch - 7ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0502 - val_loss: 0.0676 - lr: 1.0000e-05 - 286ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0498 - val_loss: 0.0677 - lr: 1.0000e-05 - 311ms/epoch - 7ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0494 - val_loss: 0.0678 - lr: 1.0000e-05 - 292ms/epoch - 6ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0489 - val_loss: 0.0678 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0485 - val_loss: 0.0679 - lr: 1.0000e-05 - 302ms/epoch - 7ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0481 - val_loss: 0.0679 - lr: 1.0000e-05 - 293ms/epoch - 7ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0476 - val_loss: 0.0680 - lr: 1.0000e-05 - 281ms/epoch - 6ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0472 - val_loss: 0.0680 - lr: 1.0000e-05 - 281ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0468 - val_loss: 0.0681 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0464 - val_loss: 0.0682 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0459 - val_loss: 0.0682 - lr: 1.0000e-05 - 297ms/epoch - 7ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0455 - val_loss: 0.0683 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0451 - val_loss: 0.0683 - lr: 1.0000e-05 - 287ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.05179
45/45 - 0s - loss: 1.0447 - val_loss: 0.0684 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 00051: early stopping

KAMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 21.57120658320832
RMSE: 4.6444813040002995
MAPE: 3.6837316829247877
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
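MIDPOINT is the simplest overlap study in this comparison: the mean of the highest and lowest price inside each rolling window. A pure-Python sketch of the TA-Lib function described above (the None padding for the warm-up region is my choice):

```python
def midpoint(prices, timeperiod=14):
    # MidPoint over period: (highest + lowest) / 2 within each window.
    out = [None] * (timeperiod - 1)
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append((max(window) + min(window)) / 2)
    return out
```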
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.32 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4212.289, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3747.746, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.16 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3523.401, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3387.759, Time=0.12 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.51 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.65 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3389.758, Time=0.16 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.035 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1689.879
Date: Sun, 12 Dec 2021 AIC 3387.759
Time: 14:05:16 BIC 3406.522
Sample: 0 HQIC 3394.964
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1878 0.003 -345.315 0.000 -1.195 -1.181
ar.L2 -0.8876 0.007 -121.809 0.000 -0.902 -0.873
ar.L3 -0.3957 0.007 -60.127 0.000 -0.409 -0.383
sigma2 3.8904 0.020 193.404 0.000 3.851 3.930
===================================================================================
Ljung-Box (L1) (Q): 13.21 Jarque-Bera (JB): 1659080.01
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.08 Skew: 3.28
Prob(H) (two-sided): 0.00 Kurtosis: 225.31
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.06664, saving model to LSTM4.h5
58/58 - 6s - loss: 1.4317 - val_loss: 0.0666 - lr: 0.0010 - 6s/epoch - 96ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.06664
58/58 - 0s - loss: 1.2922 - val_loss: 0.0704 - lr: 0.0010 - 382ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.06664
58/58 - 0s - loss: 1.0995 - val_loss: 0.0737 - lr: 0.0010 - 381ms/epoch - 7ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.9549 - val_loss: 0.0793 - lr: 0.0010 - 368ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.8739 - val_loss: 0.0852 - lr: 0.0010 - 385ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.8200 - val_loss: 0.0912 - lr: 0.0010 - 357ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7942 - val_loss: 0.0918 - lr: 1.0000e-04 - 367ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7906 - val_loss: 0.0924 - lr: 1.0000e-04 - 354ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7870 - val_loss: 0.0930 - lr: 1.0000e-04 - 375ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7835 - val_loss: 0.0937 - lr: 1.0000e-04 - 367ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7800 - val_loss: 0.0944 - lr: 1.0000e-04 - 361ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7778 - val_loss: 0.0944 - lr: 1.0000e-05 - 366ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7775 - val_loss: 0.0945 - lr: 1.0000e-05 - 379ms/epoch - 7ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7772 - val_loss: 0.0946 - lr: 1.0000e-05 - 380ms/epoch - 7ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7768 - val_loss: 0.0947 - lr: 1.0000e-05 - 381ms/epoch - 7ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7764 - val_loss: 0.0947 - lr: 1.0000e-05 - 366ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7761 - val_loss: 0.0948 - lr: 1.0000e-05 - 383ms/epoch - 7ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7757 - val_loss: 0.0949 - lr: 1.0000e-05 - 372ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7753 - val_loss: 0.0950 - lr: 1.0000e-05 - 363ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7750 - val_loss: 0.0950 - lr: 1.0000e-05 - 366ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7746 - val_loss: 0.0951 - lr: 1.0000e-05 - 377ms/epoch - 7ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7742 - val_loss: 0.0952 - lr: 1.0000e-05 - 366ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7739 - val_loss: 0.0953 - lr: 1.0000e-05 - 373ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7735 - val_loss: 0.0954 - lr: 1.0000e-05 - 360ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7731 - val_loss: 0.0954 - lr: 1.0000e-05 - 378ms/epoch - 7ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7727 - val_loss: 0.0955 - lr: 1.0000e-05 - 381ms/epoch - 7ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7723 - val_loss: 0.0956 - lr: 1.0000e-05 - 385ms/epoch - 7ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7720 - val_loss: 0.0957 - lr: 1.0000e-05 - 373ms/epoch - 6ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7716 - val_loss: 0.0958 - lr: 1.0000e-05 - 367ms/epoch - 6ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7712 - val_loss: 0.0959 - lr: 1.0000e-05 - 372ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7708 - val_loss: 0.0960 - lr: 1.0000e-05 - 367ms/epoch - 6ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7704 - val_loss: 0.0960 - lr: 1.0000e-05 - 376ms/epoch - 6ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7700 - val_loss: 0.0961 - lr: 1.0000e-05 - 377ms/epoch - 7ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7696 - val_loss: 0.0962 - lr: 1.0000e-05 - 361ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7693 - val_loss: 0.0963 - lr: 1.0000e-05 - 363ms/epoch - 6ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7689 - val_loss: 0.0964 - lr: 1.0000e-05 - 368ms/epoch - 6ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7685 - val_loss: 0.0965 - lr: 1.0000e-05 - 353ms/epoch - 6ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7681 - val_loss: 0.0966 - lr: 1.0000e-05 - 354ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7677 - val_loss: 0.0967 - lr: 1.0000e-05 - 366ms/epoch - 6ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7673 - val_loss: 0.0967 - lr: 1.0000e-05 - 365ms/epoch - 6ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7669 - val_loss: 0.0968 - lr: 1.0000e-05 - 366ms/epoch - 6ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7665 - val_loss: 0.0969 - lr: 1.0000e-05 - 375ms/epoch - 6ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7662 - val_loss: 0.0970 - lr: 1.0000e-05 - 373ms/epoch - 6ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7658 - val_loss: 0.0971 - lr: 1.0000e-05 - 380ms/epoch - 7ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7654 - val_loss: 0.0972 - lr: 1.0000e-05 - 366ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7650 - val_loss: 0.0973 - lr: 1.0000e-05 - 376ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7646 - val_loss: 0.0974 - lr: 1.0000e-05 - 377ms/epoch - 7ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7642 - val_loss: 0.0975 - lr: 1.0000e-05 - 369ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7638 - val_loss: 0.0976 - lr: 1.0000e-05 - 371ms/epoch - 6ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7634 - val_loss: 0.0977 - lr: 1.0000e-05 - 371ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.06664
58/58 - 0s - loss: 0.7631 - val_loss: 0.0977 - lr: 1.0000e-05 - 357ms/epoch - 6ms/step
Epoch 00051: early stopping
MIDPOINT
Prediction vs Close: 50.37% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 17.38125304406819
RMSE: 4.169082997982673
MAPE: 3.3993243705608664
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
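T3 is Tillson's smoothed triple EMA: six cascaded EMAs combined with coefficients derived from the volume factor `vfactor`. The sketch below follows Tillson's published coefficient formulas; it is not guaranteed to match TA-Lib's exact implementation, which differs in warm-up handling.

```python
def ema(prices, period):
    # Simple recursive EMA used as a building block below.
    k = 2 / (period + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(out[-1] + k * (p - out[-1]))
    return out

def t3(prices, timeperiod=5, vfactor=0.7):
    # Tillson T3: weighted combination of six cascaded EMAs. The
    # coefficients c1..c4 sum to 1, so a constant input maps to itself.
    a = vfactor
    e1 = ema(prices, timeperiod)
    e2 = ema(e1, timeperiod)
    e3 = ema(e2, timeperiod)
    e4 = ema(e3, timeperiod)
    e5 = ema(e4, timeperiod)
    e6 = ema(e5, timeperiod)
    c1 = -a ** 3
    c2 = 3 * a ** 2 + 3 * a ** 3
    c3 = -6 * a ** 2 - 3 * a - 3 * a ** 3
    c4 = 1 + 3 * a + a ** 3 + 3 * a ** 2
    return [c1 * x6 + c2 * x5 + c3 * x4 + c4 * x3
            for x3, x4, x5, x6 in zip(e3, e4, e5, e6)]
```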
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.33 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4414.515, Time=0.02 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3944.062, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.26 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3715.173, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3577.471, Time=0.10 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.75 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.47 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3579.471, Time=0.23 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.246 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1784.736
Date: Sun, 12 Dec 2021 AIC 3577.471
Time: 14:06:56 BIC 3596.235
Sample: 0 HQIC 3584.677
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.844 0.000 -1.204 -1.192
ar.L2 -0.8974 0.006 -139.861 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.862 0.000 -0.410 -0.387
sigma2 4.9242 0.023 215.469 0.000 4.879 4.969
===================================================================================
Ljung-Box (L1) (Q): 14.55 Jarque-Bera (JB): 2468024.38
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 274.15
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04821, saving model to LSTM4.h5
43/43 - 5s - loss: 1.3631 - val_loss: 0.0482 - lr: 0.0010 - 5s/epoch - 115ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.04821
43/43 - 0s - loss: 1.1997 - val_loss: 0.0521 - lr: 0.0010 - 276ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.04821
43/43 - 0s - loss: 1.0124 - val_loss: 0.0565 - lr: 0.0010 - 288ms/epoch - 7ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.9002 - val_loss: 0.0608 - lr: 0.0010 - 287ms/epoch - 7ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.8437 - val_loss: 0.0649 - lr: 0.0010 - 281ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.8075 - val_loss: 0.0691 - lr: 0.0010 - 280ms/epoch - 7ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7899 - val_loss: 0.0695 - lr: 1.0000e-04 - 285ms/epoch - 7ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7874 - val_loss: 0.0700 - lr: 1.0000e-04 - 294ms/epoch - 7ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7848 - val_loss: 0.0704 - lr: 1.0000e-04 - 293ms/epoch - 7ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7823 - val_loss: 0.0709 - lr: 1.0000e-04 - 273ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7797 - val_loss: 0.0714 - lr: 1.0000e-04 - 276ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7782 - val_loss: 0.0714 - lr: 1.0000e-05 - 288ms/epoch - 7ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7779 - val_loss: 0.0715 - lr: 1.0000e-05 - 289ms/epoch - 7ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7776 - val_loss: 0.0715 - lr: 1.0000e-05 - 275ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7774 - val_loss: 0.0716 - lr: 1.0000e-05 - 287ms/epoch - 7ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7771 - val_loss: 0.0716 - lr: 1.0000e-05 - 289ms/epoch - 7ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7768 - val_loss: 0.0717 - lr: 1.0000e-05 - 280ms/epoch - 7ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7766 - val_loss: 0.0718 - lr: 1.0000e-05 - 288ms/epoch - 7ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7763 - val_loss: 0.0718 - lr: 1.0000e-05 - 303ms/epoch - 7ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7760 - val_loss: 0.0719 - lr: 1.0000e-05 - 280ms/epoch - 7ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7757 - val_loss: 0.0719 - lr: 1.0000e-05 - 283ms/epoch - 7ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7754 - val_loss: 0.0720 - lr: 1.0000e-05 - 288ms/epoch - 7ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7752 - val_loss: 0.0721 - lr: 1.0000e-05 - 290ms/epoch - 7ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7749 - val_loss: 0.0721 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7746 - val_loss: 0.0722 - lr: 1.0000e-05 - 290ms/epoch - 7ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7743 - val_loss: 0.0723 - lr: 1.0000e-05 - 281ms/epoch - 7ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7740 - val_loss: 0.0723 - lr: 1.0000e-05 - 284ms/epoch - 7ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7737 - val_loss: 0.0724 - lr: 1.0000e-05 - 273ms/epoch - 6ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7734 - val_loss: 0.0725 - lr: 1.0000e-05 - 289ms/epoch - 7ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7731 - val_loss: 0.0726 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7728 - val_loss: 0.0726 - lr: 1.0000e-05 - 288ms/epoch - 7ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7725 - val_loss: 0.0727 - lr: 1.0000e-05 - 287ms/epoch - 7ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7722 - val_loss: 0.0728 - lr: 1.0000e-05 - 290ms/epoch - 7ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7719 - val_loss: 0.0728 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7716 - val_loss: 0.0729 - lr: 1.0000e-05 - 301ms/epoch - 7ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7713 - val_loss: 0.0730 - lr: 1.0000e-05 - 286ms/epoch - 7ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7710 - val_loss: 0.0731 - lr: 1.0000e-05 - 283ms/epoch - 7ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7707 - val_loss: 0.0731 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7704 - val_loss: 0.0732 - lr: 1.0000e-05 - 287ms/epoch - 7ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7701 - val_loss: 0.0733 - lr: 1.0000e-05 - 281ms/epoch - 7ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7698 - val_loss: 0.0734 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7695 - val_loss: 0.0735 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7692 - val_loss: 0.0735 - lr: 1.0000e-05 - 296ms/epoch - 7ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7689 - val_loss: 0.0736 - lr: 1.0000e-05 - 273ms/epoch - 6ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7686 - val_loss: 0.0737 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7683 - val_loss: 0.0738 - lr: 1.0000e-05 - 297ms/epoch - 7ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7680 - val_loss: 0.0739 - lr: 1.0000e-05 - 290ms/epoch - 7ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7677 - val_loss: 0.0740 - lr: 1.0000e-05 - 285ms/epoch - 7ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7674 - val_loss: 0.0740 - lr: 1.0000e-05 - 299ms/epoch - 7ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7671 - val_loss: 0.0741 - lr: 1.0000e-05 - 292ms/epoch - 7ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.04821
43/43 - 0s - loss: 0.7668 - val_loss: 0.0742 - lr: 1.0000e-05 - 287ms/epoch - 7ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 22.0961825771905
RMSE: 4.700657674963207
MAPE: 3.7488296078488137
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 36.69312385194829
RMSE: 6.057484944426053
MAPE: 4.755707959713801
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 61.47074835668693
RMSE: 7.8403283321992925
MAPE: 6.468176158698829
DEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 114.21230424130383
RMSE: 10.687015684525958
MAPE: 9.305044543155903
KAMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 21.57120658320832
RMSE: 4.6444813040002995
MAPE: 3.6837316829247877
MIDPOINT
Prediction vs Close: 50.37% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 17.38125304406819
RMSE: 4.169082997982673
MAPE: 3.3993243705608664
T3
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 60.321913944220896
RMSE: 7.766718351029661
MAPE: 6.200911576902634
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.38 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4352.703, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3889.412, Time=0.03 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.19 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3689.930, Time=0.04 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3574.245, Time=0.11 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.39 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.61 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3576.245, Time=0.23 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.021 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1783.123
Date: Sun, 12 Dec 2021 AIC 3574.245
Time: 14:08:25 BIC 3593.008
Sample: 0 HQIC 3581.451
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1480 0.004 -302.430 0.000 -1.155 -1.141
ar.L2 -0.8300 0.008 -99.682 0.000 -0.846 -0.814
ar.L3 -0.3687 0.007 -50.527 0.000 -0.383 -0.354
sigma2 4.9055 0.028 175.970 0.000 4.851 4.960
===================================================================================
Ljung-Box (L1) (Q): 11.61 Jarque-Bera (JB): 1261976.58
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.16 Skew: 2.52
Prob(H) (two-sided): 0.00 Kurtosis: 196.90
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
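auto_arima's stepwise search above selects the order with the lowest AIC, defined as 2k − 2 ln L. As a sanity check, plugging the reported log likelihood (−1783.123) and the model's k = 4 estimated parameters (ar.L1–ar.L3 plus sigma2) into that formula recovers the reported AIC of 3574.245:

```python
import math

log_likelihood = -1783.123  # from the SARIMAX summary above
k = 4                       # ar.L1, ar.L2, ar.L3, sigma2

aic = 2 * k - 2 * log_likelihood
assert math.isclose(aic, 3574.245, abs_tol=0.01)
```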
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05339, saving model to LSTM4.h5
90/90 - 5s - loss: 1.2628 - val_loss: 0.0534 - lr: 0.0010 - 5s/epoch - 56ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.9413 - val_loss: 0.0609 - lr: 0.0010 - 573ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.8281 - val_loss: 0.0685 - lr: 0.0010 - 551ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.7724 - val_loss: 0.0759 - lr: 0.0010 - 561ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.7358 - val_loss: 0.0833 - lr: 0.0010 - 554ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.7085 - val_loss: 0.0906 - lr: 0.0010 - 563ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6946 - val_loss: 0.0914 - lr: 1.0000e-04 - 569ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6925 - val_loss: 0.0922 - lr: 1.0000e-04 - 557ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6903 - val_loss: 0.0930 - lr: 1.0000e-04 - 570ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6881 - val_loss: 0.0938 - lr: 1.0000e-04 - 544ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6859 - val_loss: 0.0947 - lr: 1.0000e-04 - 563ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6845 - val_loss: 0.0948 - lr: 1.0000e-05 - 568ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6843 - val_loss: 0.0949 - lr: 1.0000e-05 - 552ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6840 - val_loss: 0.0950 - lr: 1.0000e-05 - 549ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6838 - val_loss: 0.0951 - lr: 1.0000e-05 - 549ms/epoch - 6ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6835 - val_loss: 0.0952 - lr: 1.0000e-05 - 570ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6833 - val_loss: 0.0953 - lr: 1.0000e-05 - 560ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6830 - val_loss: 0.0954 - lr: 1.0000e-05 - 564ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6828 - val_loss: 0.0956 - lr: 1.0000e-05 - 549ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6825 - val_loss: 0.0957 - lr: 1.0000e-05 - 555ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6822 - val_loss: 0.0958 - lr: 1.0000e-05 - 547ms/epoch - 6ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6820 - val_loss: 0.0959 - lr: 1.0000e-05 - 559ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6817 - val_loss: 0.0960 - lr: 1.0000e-05 - 563ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6814 - val_loss: 0.0962 - lr: 1.0000e-05 - 547ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6812 - val_loss: 0.0963 - lr: 1.0000e-05 - 551ms/epoch - 6ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6809 - val_loss: 0.0964 - lr: 1.0000e-05 - 546ms/epoch - 6ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6806 - val_loss: 0.0966 - lr: 1.0000e-05 - 551ms/epoch - 6ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6803 - val_loss: 0.0967 - lr: 1.0000e-05 - 546ms/epoch - 6ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6800 - val_loss: 0.0969 - lr: 1.0000e-05 - 548ms/epoch - 6ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6798 - val_loss: 0.0970 - lr: 1.0000e-05 - 551ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6795 - val_loss: 0.0971 - lr: 1.0000e-05 - 548ms/epoch - 6ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6792 - val_loss: 0.0973 - lr: 1.0000e-05 - 542ms/epoch - 6ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6789 - val_loss: 0.0974 - lr: 1.0000e-05 - 541ms/epoch - 6ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6786 - val_loss: 0.0976 - lr: 1.0000e-05 - 541ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6783 - val_loss: 0.0977 - lr: 1.0000e-05 - 541ms/epoch - 6ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6780 - val_loss: 0.0979 - lr: 1.0000e-05 - 556ms/epoch - 6ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6777 - val_loss: 0.0981 - lr: 1.0000e-05 - 552ms/epoch - 6ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6774 - val_loss: 0.0982 - lr: 1.0000e-05 - 549ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6772 - val_loss: 0.0984 - lr: 1.0000e-05 - 563ms/epoch - 6ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6769 - val_loss: 0.0985 - lr: 1.0000e-05 - 554ms/epoch - 6ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6766 - val_loss: 0.0987 - lr: 1.0000e-05 - 559ms/epoch - 6ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6763 - val_loss: 0.0989 - lr: 1.0000e-05 - 551ms/epoch - 6ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6760 - val_loss: 0.0991 - lr: 1.0000e-05 - 562ms/epoch - 6ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6757 - val_loss: 0.0992 - lr: 1.0000e-05 - 570ms/epoch - 6ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6754 - val_loss: 0.0994 - lr: 1.0000e-05 - 549ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6751 - val_loss: 0.0996 - lr: 1.0000e-05 - 562ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6748 - val_loss: 0.0997 - lr: 1.0000e-05 - 559ms/epoch - 6ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6745 - val_loss: 0.0999 - lr: 1.0000e-05 - 562ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6742 - val_loss: 0.1001 - lr: 1.0000e-05 - 564ms/epoch - 6ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6739 - val_loss: 0.1003 - lr: 1.0000e-05 - 560ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.05339
90/90 - 1s - loss: 0.6736 - val_loss: 0.1005 - lr: 1.0000e-05 - 548ms/epoch - 6ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 22.0961825771905
RMSE: 4.700657674963207
MAPE: 3.7488296078488137
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 36.69312385194829
RMSE: 6.057484944426053
MAPE: 4.755707959713801
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 61.47074835668693
RMSE: 7.8403283321992925
MAPE: 6.468176158698829
DEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 114.21230424130383
RMSE: 10.687015684525958
MAPE: 9.305044543155903
KAMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 21.57120658320832
RMSE: 4.6444813040002995
MAPE: 3.6837316829247877
MIDPOINT
Prediction vs Close: 50.37% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 17.38125304406819
RMSE: 4.169082997982673
MAPE: 3.3993243705608664
T3
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 60.321913944220896
RMSE: 7.766718351029661
MAPE: 6.200911576902634
TEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 25.760985062606874
RMSE: 5.075528057513511
MAPE: 4.549137795705406
Runtime: mins: 11.88320966559999
from google.colab import files
import cv2
import matplotlib.pyplot as plt
uploaded = files.upload()
imgfile = 'Experiment4'
img = cv2.imread('Experiment4.png')  # note: cv2 reads images as BGR
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fb3f42846d0>
import json

with open('simulation4_data.json') as json_file:
    simulation4 = json.load(json_file)
imgfile = 'Experiment4'
for SIM in simulation4.keys():
    plot_train(simulation4, SIM)
    plot_test(simulation4, SIM)
----- Train RMSE for SMA ----- 2.261242280864947 ----- Train_MSE_LSTM for SMA ----- 5.113216652771308 ----- Train MAE LSTM for SMA ----- 2.2305949702121244
----- Test RMSE for SMA----- 4.700657674963207 ----- Test_MSE_LSTM for SMA----- 22.0961825771905 ----- Test_MAE_LSTM for SMA----- 3.7488296078488137
----- Train RMSE for EMA ----- 4.486578445228125 ----- Train_MSE_LSTM for EMA ----- 20.12938614518562 ----- Train MAE LSTM for EMA ----- 4.457049346206212
----- Test RMSE for EMA----- 6.057484944426053 ----- Test_MSE_LSTM for EMA----- 36.69312385194829 ----- Test_MAE_LSTM for EMA----- 4.755707959713801
----- Train RMSE for WMA ----- 3.9111937815761815 ----- Train_MSE_LSTM for WMA ----- 15.29743679704019 ----- Train MAE LSTM for WMA ----- 3.8069788630646055
----- Test RMSE for WMA----- 7.8403283321992925 ----- Test_MSE_LSTM for WMA----- 61.47074835668693 ----- Test_MAE_LSTM for WMA----- 6.468176158698829
----- Train RMSE for DEMA ----- 1.8914967024294371 ----- Train_MSE_LSTM for DEMA ----- 3.577759775301435 ----- Train MAE LSTM for DEMA ----- 1.0511807545576946
----- Test RMSE for DEMA----- 10.687015684525958 ----- Test_MSE_LSTM for DEMA----- 114.21230424130383 ----- Test_MAE_LSTM for DEMA----- 9.305044543155903
----- Train RMSE for KAMA ----- 0.6095271658293977 ----- Train_MSE_LSTM for KAMA ----- 0.37152336588401813 ----- Train MAE LSTM for KAMA ----- 0.1940257596497488
----- Test RMSE for KAMA----- 4.6444813040002995 ----- Test_MSE_LSTM for KAMA----- 21.57120658320832 ----- Test_MAE_LSTM for KAMA----- 3.6837316829247877
----- Train RMSE for MIDPOINT ----- 3.7511147201647144 ----- Train_MSE_LSTM for MIDPOINT ----- 14.070861643836402 ----- Train MAE LSTM for MIDPOINT ----- 3.721844446541059
----- Test RMSE for MIDPOINT----- 4.169082997982673 ----- Test_MSE_LSTM for MIDPOINT----- 17.38125304406819 ----- Test_MAE_LSTM for MIDPOINT----- 3.3993243705608664
----- Train RMSE for T3 ----- 2.13408676739365 ----- Train_MSE_LSTM for T3 ----- 4.5543263307646775 ----- Train MAE LSTM for T3 ----- 1.9476125499989727
----- Test RMSE for T3----- 7.766718351029661 ----- Test_MSE_LSTM for T3----- 60.321913944220896 ----- Test_MAE_LSTM for T3----- 6.200911576902634
----- Train RMSE for TEMA ----- 1.5059206544231611 ----- Train_MSE_LSTM for TEMA ----- 2.267797017418282 ----- Train MAE LSTM for TEMA ----- 1.3943178417659041
----- Test RMSE for TEMA----- 5.075528057513511 ----- Test_MSE_LSTM for TEMA----- 25.760985062606874 ----- Test_MAE_LSTM for TEMA----- 4.549137795705406
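Note that RMSE in these tables is derived directly as the square root of MSE (`rmse = mse ** 0.5` in the code), so the two columns are redundant but serve as a quick consistency check, e.g. for the SMA test figures:

```python
import math

# Reported SMA test metrics from the summary above
mse = 22.0961825771905
rmse = 4.700657674963207

# RMSE is defined in this notebook as mse ** 0.5
assert math.isclose(rmse, math.sqrt(mse), rel_tol=1e-9)
```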
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    # Initialize model and determine its order via stepwise search
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')
    # Generate rolling one-step-ahead predictions: refit on the history,
    # predict the next step, then append the true value and repeat
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])
    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1, 1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1, 1))
    # Generate error data
    mse = mean_squared_error(yc_test, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc, predictionte.flatten().tolist(), mse, rmse, mae
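get_arima_exog generates its test-set forecasts walk-forward: at each step it refits an ARIMA of the chosen order on the history, predicts one step ahead, then appends the true value before moving on. A minimal, self-contained sketch of the same rolling-origin pattern, using a hand-rolled least-squares AR(1) instead of pmdarima (`ar1_fit` and `rolling_forecast` are illustrative names, not part of the notebook):

```python
import numpy as np

def ar1_fit(history):
    # Least-squares AR(1) coefficient phi for y[t] ~ phi * y[t-1]
    y_prev = np.asarray(history[:-1], dtype=float)
    y_next = np.asarray(history[1:], dtype=float)
    return float(y_prev @ y_next) / float(y_prev @ y_prev)

def rolling_forecast(train, test):
    # Walk-forward: forecast one step ahead, then append the observed
    # value to the history and refit, exactly as the loop above does
    history = list(train)
    preds = []
    for actual in test:
        phi = ar1_fit(history)
        preds.append(phi * history[-1])
        history.append(actual)
    return preds

# A noiseless geometric series: the fitted phi is 0.9 at every step
series = [1.0, 0.9, 0.81, 0.729, 0.6561, 0.59049]
preds = rolling_forecast(series[:4], series[4:])
```

Refitting once per step is what makes this loop slow for long test sets, which is visible in the per-MA runtimes logged above.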
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape. Each sample in X is an input_dim x feature_size
    # window (n_steps_in days' worth of data); yc holds the corresponding closing prices.
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # constant offset subtracted from the test predictions below
    input_dim = X_train.shape[1]
    feature_size = X_train.shape[2]
    output_dim = y_train.shape[1]
    # Option 1
    # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(256, activation='relu', kernel_initializer='he_normal',
                   input_shape=(input_dim, feature_size)))
    model.add(Dense(units=64, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
    ## Common code
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM5.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file + '.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int(optimized_period[ma]),
                        verbose=2, callbacks=callbacks, validation_data=(X_test, y_test), shuffle=False)
    # Plot loss
    fname2 = img_file + '-' + ma
    plt.title(img_file + '-' + ma + ' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2 + '.png', dpi='figure')
    pyplot.show()
# # option 2
# model = Sequential()
# model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
# model.add(Dense(64))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 3
# define custom activation
#
# class Double_Tanh(Activation):
# def __init__(self, activation, **kwargs):
# super(Double_Tanh, self).__init__(activation, **kwargs)
# self.__name__ = 'double_tanh'
# def double_tanh(x):
# return (K.tanh(x) * 2)
# get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
# # Model Generation
# model = Sequential()
# #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
# model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
# model.add(Dense(1))
# model.add(Activation(double_tanh))
# model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 4
# Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
# model.add(LSTM(units=int(lstm_len/2)))
# model.add(Dense(1, activation='sigmoid'))
# model.compile(loss='mean_squared_error', optimizer='adam')
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM5.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
    # Generate predictions on the original scale
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for step in predictiontr:
        outputtr.extend(step)
    predictiontr = outputtr
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    # Generate error data; both series must be on the same (original) scale
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()
    outputte = []
    for step in predictionte:
        outputte.extend(step)
    predictionte = outputte
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))
    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
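Because the LSTM trains on MinMax-scaled targets, its outputs must be passed through `y_scaler.inverse_transform` before they can be compared against prices, and both series in an error metric must be on the same scale. A small sketch of that scale/invert round-trip, with a hand-rolled stand-in for MinMaxScaler (`minmax_scale`/`minmax_invert` are hypothetical helpers, not the notebook's API):

```python
import numpy as np

def minmax_scale(x, lo=-1.0, hi=1.0):
    # Scale x into [lo, hi]; return the parameters needed to invert
    xmin, xmax = float(np.min(x)), float(np.max(x))
    scaled = (np.asarray(x, dtype=float) - xmin) / (xmax - xmin) * (hi - lo) + lo
    return scaled, (xmin, xmax, lo, hi)

def minmax_invert(scaled, params):
    # Undo minmax_scale, recovering the original units
    xmin, xmax, lo, hi = params
    return (np.asarray(scaled, dtype=float) - lo) / (hi - lo) * (xmax - xmin) + xmin

y = np.array([10.0, 12.0, 11.0, 15.0])
y_scaled, p = minmax_scale(y)        # in [-1, 1], like MinMaxScaler(feature_range=(-1, 1))
y_back = minmax_invert(y_scaled, p)  # back in price units
```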
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation5 = {}
    imgfile = 'Experiment5'
    for ma in optimized_period:
        print(ma)
        print(functions[ma])
        print(int(optimized_period[ma]))
        # Split each column into a low-volatility MA component and a
        # high-volatility residual component
        low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
        low_vol = low_vol.fillna(0)
        low_vol_data = df['close']
        high_vol = pd.DataFrame()
        df2 = df.copy()
        for col in df2.columns:
            if col in low_vol.columns:
                high_vol[col] = df2[col].subtract(low_vol[col], fill_value=0)
        high_vol_data = df['close']
        # Generate ARIMA and LSTM predictions
        print('\nWorking on ' + ma + ' predictions')
        try:
            print('parameters used : ', train_len, test_len)
            low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = \
                get_arima_exog(low_vol, low_vol_data, train_len, test_len)
        except Exception:
            print('ARIMA error, skipping to next MA type')
            continue
        (Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr,
         high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae) = \
            get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
        final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)
        mse_ftr = mean_squared_error(df['close'].head(train_len).values, final_prediction_tr.values)
        rmse_ftr = mse_ftr ** 0.5
        mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        # Recombine components, ignoring the first 3 ARIMA steps
        final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
        mse = mean_squared_error(df['close'].tail(test_len).values, final_prediction.values)
        rmse = mse ** 0.5
        mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        # Generate directional prediction accuracy
        actual = df['close'].tail(test_len).values
        result_1 = []
        result_2 = []
        for i in range(1, len(final_prediction)):
            # Compare prediction to previous close price
            if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                result_1.append(1)
            elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                result_1.append(1)
            else:
                result_1.append(0)
            # Compare prediction to previous prediction
            if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                result_2.append(1)
            elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                result_2.append(1)
            else:
                result_2.append(0)
        accuracy_1 = np.mean(result_1)
        accuracy_2 = np.mean(result_2)
        simulation5[ma] = {'low_vol': {'original': list(low_vol_Original), 'prediction': list(low_vol_prediction),
                                       'mse': low_vol_mse, 'rmse': low_vol_rmse, 'mae': low_vol_mae},
                           'high_vol': {'original': list(high_vol_Original), 'prediction': list(high_vol_prediction),
                                        'mse': high_vol_mse, 'rmse': high_vol_rmse, 'mae': high_vol_mae},
                           'final_tr': {'original': df['close'].head(train_len).tolist(),
                                        'prediction': final_prediction_tr.values.tolist(),
                                        'mse': mse_ftr, 'rmse': rmse_ftr, 'mae': mae_ftr, 'mape': mape_ftr},
                           'final': {'original': df['close'].tail(test_len).tolist(),
                                     'prediction': final_prediction.values.tolist(),
                                     'mse': mse, 'rmse': rmse, 'mae': mae, 'mape': mape},
                           'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
        # Save simulation data here as a checkpoint
        with open('simulation5_data.json', 'w') as fp:
            json.dump(simulation5, fp)
        # Print a cumulative summary after each MA type
        for key in simulation5.keys():
            print('\n' + key)
            print('Prediction vs Close:\t\t' + str(round(100*simulation5[key]['accuracy']['prediction vs close'], 2))
                  + '% Accuracy')
            print('Prediction vs Prediction:\t' + str(round(100*simulation5[key]['accuracy']['prediction vs prediction'], 2))
                  + '% Accuracy')
            print('MSE:\t', simulation5[key]['final']['mse'],
                  '\nRMSE:\t', simulation5[key]['final']['rmse'],
                  '\nMAPE:\t', simulation5[key]['final']['mape'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:', elapsed/60)
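The "Prediction vs Close" accuracy above counts a hit whenever the prediction and the actual close move to the same side of the previous close. The same rule extracted as a standalone function with toy data (`directional_accuracy` is an illustrative name, not part of the notebook):

```python
def directional_accuracy(pred, actual):
    # Hit when prediction and actual close sit on the same side of the previous close
    hits = []
    for i in range(1, len(pred)):
        up_hit = pred[i] > actual[i-1] and actual[i] > actual[i-1]
        down_hit = pred[i] < actual[i-1] and actual[i] < actual[i-1]
        hits.append(1 if up_hit or down_hit else 0)
    return sum(hits) / len(hits)

actual = [10.0, 11.0, 10.5, 10.8]
pred = [10.0, 10.6, 10.9, 10.4]
acc = directional_accuracy(pred, actual)  # hits on steps 1 and 2, miss on step 3
```

Ties (prediction exactly equal to the previous close) count as misses under this rule, which slightly penalizes flat forecasts.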
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-14771.778, Time=16.11 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14135.387, Time=7.78 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15280.870, Time=12.96 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15393.475, Time=10.89 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-14981.217, Time=5.56 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14516.868, Time=17.01 sec
ARIMA(0,3,1)(0,0,0)[0] intercept : AIC=-15663.967, Time=12.26 sec
ARIMA(0,3,0)(0,0,0)[0] intercept : AIC=-13838.679, Time=6.48 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=-14734.479, Time=7.67 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-14866.409, Time=9.36 sec
ARIMA(1,3,0)(0,0,0)[0] intercept : AIC=-16157.403, Time=17.47 sec
ARIMA(2,3,0)(0,0,0)[0] intercept : AIC=-14855.623, Time=13.78 sec
ARIMA(2,3,1)(0,0,0)[0] intercept : AIC=-14720.644, Time=14.11 sec
Best model: ARIMA(1,3,0)(0,0,0)[0] intercept
Total fit time: 151.496 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 0) Log Likelihood 8103.701
Date: Sun, 12 Dec 2021 AIC -16157.403
Time: 14:24:44 BIC -16040.132
Sample: 0 HQIC -16112.366
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
intercept -2.802e-06 7.54e-07 -3.714 0.000 -4.28e-06 -1.32e-06
x1 -2.598e-05 0.001 -0.041 0.967 -0.001 0.001
x2 -2.599e-05 0.001 -0.047 0.963 -0.001 0.001
x3 -2.615e-05 0.001 -0.038 0.970 -0.001 0.001
x4 1.0000 0.001 1507.083 0.000 0.999 1.001
x5 -2.485e-05 0.001 -0.038 0.970 -0.001 0.001
x6 -2.807e-05 3.32e-05 -0.845 0.398 -9.32e-05 3.71e-05
x7 -2.593e-05 8.29e-05 -0.313 0.755 -0.000 0.000
x8 0.0019 7.15e-05 26.753 0.000 0.002 0.002
x9 -1.867e-06 0.001 -0.003 0.998 -0.001 0.001
x10 0.0003 0.000 0.644 0.520 -0.001 0.001
x11 -0.0025 8.93e-05 -28.145 0.000 -0.003 -0.002
x12 0.0015 8.06e-05 18.290 0.000 0.001 0.002
x13 -2.61e-05 0.000 -0.076 0.939 -0.001 0.001
x14 -7.719e-05 0.000 -0.374 0.708 -0.000 0.000
x15 -2.829e-05 8.57e-05 -0.330 0.741 -0.000 0.000
x16 -2.424e-05 0.000 -0.142 0.887 -0.000 0.000
x17 -2.292e-05 9.81e-05 -0.234 0.815 -0.000 0.000
x18 -4.39e-05 0.000 -0.429 0.668 -0.000 0.000
x19 -3.005e-05 0.000 -0.293 0.770 -0.000 0.000
x20 4.559e-05 9.36e-05 0.487 0.626 -0.000 0.000
x21 -7.981e-10 0.001 -9.88e-07 1.000 -0.002 0.002
x22 -1.557e-08 0.000 -0.000 1.000 -0.000 0.000
ar.L1 -0.6667 6.95e-05 -9587.073 0.000 -0.667 -0.667
sigma2 1.314e-10 7.8e-11 1.686 0.092 -2.14e-11 2.84e-10
===================================================================================
Ljung-Box (L1) (Q): 90.59 Jarque-Bera (JB): 3138023.60
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.03 Skew: 5.01
Prob(H) (two-sided): 0.00 Kurtosis: 308.71
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.36e+19. Standard errors may be unstable.
ARIMA order: (1, 3, 0)
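The information criteria in the SARIMAX summary above follow directly from the reported log-likelihood. A quick check reproduces the table, assuming 25 estimated parameters (the intercept, 22 exogenous coefficients, ar.L1 and sigma2) and 805 effective observations after d=3 differencing; both counts are inferred from the summary, not taken from the notebook code:

```python
import math

# Values reported in the SARIMAX summary above
log_likelihood = 8103.701
k = 25    # intercept + 22 exogenous coefficients + ar.L1 + sigma2
n = 805   # 808 observations minus d = 3 lost to differencing

# Standard information-criterion formulas
aic = 2 * k - 2 * log_likelihood
bic = k * math.log(n) - 2 * log_likelihood
hqic = 2 * k * math.log(math.log(n)) - 2 * log_likelihood

# These agree with the table (AIC -16157.403, BIC -16040.132,
# HQIC -16112.366) to within rounding of the log-likelihood.
```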
WARNING:tensorflow:Layer lstm_40 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.02955, saving model to LSTM5.h5 (loss: 0.2408 - val_loss: 0.0295 - lr: 1.0e-03)
Epochs 2-5: val_loss did not improve from 0.02955
Epoch 6: ReduceLROnPlateau reducing learning rate to 1.0e-04
Epochs 7-10: val_loss did not improve from 0.02955
Epoch 11: ReduceLROnPlateau reducing learning rate to 1.0e-05
Epochs 12-51: val_loss did not improve from 0.02955 (val_loss drifting from 0.0958 down to 0.0715 with the learning rate held at the 1.0e-05 floor)
Epoch 51: early stopping
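The training log above shows ReduceLROnPlateau cutting the learning rate by a factor of 10 whenever val_loss plateaus, down to a 1e-05 floor, alongside checkpointing to LSTM5.h5 and early stopping. A minimal pure-Python sketch of that plateau logic follows; the patience of 4 epochs is an assumption inferred from where the reductions fall in the log, not taken from the notebook code:

```python
def plateau_schedule(val_losses, lr=1e-3, factor=0.1, patience=4, min_lr=1e-5):
    """Mimic Keras ReduceLROnPlateau: cut lr by `factor` after `patience`
    epochs without a new best val_loss, never going below `min_lr`.
    Returns the lr in effect at each epoch."""
    best = float("inf")
    wait = 0
    lrs = []
    for loss in val_losses:
        lrs.append(lr)
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return lrs
```

In Keras this behaviour likely corresponds to something like `ReduceLROnPlateau(monitor='val_loss', factor=0.1, min_lr=1e-5)` combined with `ModelCheckpoint('LSTM5.h5', save_best_only=True)` and `EarlyStopping`; the exact callback arguments are assumptions.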
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 36.387272258848725
RMSE: 6.032186358100081
MAPE: 4.990569235256131
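Each metrics block above (MSE, RMSE, MAPE and the directional accuracies) can be reproduced from the prediction and close series. A dependency-free sketch follows; the exact definition of the "Prediction vs Close" accuracy is an assumption, since the notebook code that computes it is not shown:

```python
import math

def regression_metrics(actual, predicted):
    """MSE, RMSE and MAPE (in percent), as reported after each run."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(mse)
    mape = 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Share of steps where the predicted move from the previous close
    has the same sign as the actual move (assumed reading of the
    'Prediction vs Close' accuracy)."""
    hits = sum(
        (predicted[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0
        for i in range(1, len(actual))
    )
    return 100 * hits / (len(actual) - 1)
```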
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
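TA-Lib's EMA, documented above, applies exponential smoothing with alpha = 2/(timeperiod+1) after seeding the average with a simple mean of the first `timeperiod` values. A dependency-free sketch of that recurrence follows; the seeding convention is my reading of TA-Lib's behaviour, so treat it as an approximation rather than a bit-exact replacement for `talib.EMA`:

```python
def ema(prices, timeperiod=30):
    """Exponential moving average in the TA-Lib style: the first
    timeperiod-1 outputs are undefined (None), the next is the simple
    mean of the first `timeperiod` prices, and each later output blends
    the new price in with weight alpha = 2 / (timeperiod + 1)."""
    alpha = 2 / (timeperiod + 1)
    seed = sum(prices[:timeperiod]) / timeperiod
    out = [None] * (timeperiod - 1) + [seed]
    for price in prices[timeperiod:]:
        seed = alpha * price + (1 - alpha) * seed
        out.append(seed)
    return out
```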
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.831, Time=3.16 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=5.45 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16288.946, Time=9.11 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=7.78 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16226.419, Time=14.02 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-13742.844, Time=10.40 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16101.256, Time=25.43 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17006.489, Time=3.19 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17002.686, Time=4.21 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17086.654, Time=8.12 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=-16097.512, Time=20.97 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17002.132, Time=4.87 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-17004.011, Time=4.58 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 121.313 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8570.327
Date: Sun, 12 Dec 2021 AIC -17086.654
Time: 14:28:05 BIC -16960.001
Sample: 0 HQIC -17038.014
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.333e-10 9.31e-21 -2.51e+10 0.000 -2.33e-10 -2.33e-10
x2 -2.326e-10 9.29e-21 -2.5e+10 0.000 -2.33e-10 -2.33e-10
x3 -2.342e-10 9.32e-21 -2.51e+10 0.000 -2.34e-10 -2.34e-10
x4 1.0000 9.31e-21 1.07e+20 0.000 1.000 1.000
x5 -2.121e-10 8.87e-21 -2.39e+10 0.000 -2.12e-10 -2.12e-10
x6 -8.055e-10 1.64e-20 -4.9e+10 0.000 -8.05e-10 -8.05e-10
x7 -2.312e-10 9.27e-21 -2.49e+10 0.000 -2.31e-10 -2.31e-10
x8 -2.26e-10 9.17e-21 -2.47e+10 0.000 -2.26e-10 -2.26e-10
x9 -1.174e-11 1.86e-21 -6.3e+09 0.000 -1.17e-11 -1.17e-11
x10 -4.486e-11 3.98e-21 -1.13e+10 0.000 -4.49e-11 -4.49e-11
x11 -2.235e-10 9.11e-21 -2.45e+10 0.000 -2.23e-10 -2.23e-10
x12 -2.28e-10 9.21e-21 -2.48e+10 0.000 -2.28e-10 -2.28e-10
x13 -2.332e-10 9.31e-21 -2.51e+10 0.000 -2.33e-10 -2.33e-10
x14 -1.78e-09 2.57e-20 -6.92e+10 0.000 -1.78e-09 -1.78e-09
x15 -2.118e-10 8.84e-21 -2.4e+10 0.000 -2.12e-10 -2.12e-10
x16 -5.28e-10 1.4e-20 -3.76e+10 0.000 -5.28e-10 -5.28e-10
x17 -2.173e-10 8.94e-21 -2.43e+10 0.000 -2.17e-10 -2.17e-10
x18 -3.83e-11 3.74e-21 -1.02e+10 0.000 -3.83e-11 -3.83e-11
x19 -2.606e-10 9.86e-21 -2.64e+10 0.000 -2.61e-10 -2.61e-10
x20 -2.433e-10 9.48e-21 -2.57e+10 0.000 -2.43e-10 -2.43e-10
x21 -3.774e-13 1.42e-24 -2.65e+11 0.000 -3.77e-13 -3.77e-13
x22 -1.096e-11 1.35e-24 -8.11e+12 0.000 -1.1e-11 -1.1e-11
ar.L1 -0.4919 1.5e-22 -3.27e+21 0.000 -0.492 -0.492
ar.L2 -0.1922 8.41e-23 -2.28e+21 0.000 -0.192 -0.192
ar.L3 -0.0462 4.01e-23 -1.15e+21 0.000 -0.046 -0.046
ma.L1 -0.7070 3.34e-22 -2.12e+21 0.000 -0.707 -0.707
sigma2 8.977e-11 6.95e-11 1.291 0.197 -4.65e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 54.80 Jarque-Bera (JB): 4212163.49
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.43
Prob(H) (two-sided): 0.00 Kurtosis: 357.21
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.65e+43. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
WARNING:tensorflow:Layer lstm_41 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.16777, saving model to LSTM5.h5 (loss: 0.2985 - val_loss: 0.1678 - lr: 1.0e-03)
Epochs 2-3: val_loss did not improve from 0.16777
Epoch 4: val_loss improved from 0.16777 to 0.02118, saving model to LSTM5.h5
Epoch 5: val_loss improved from 0.02118 to 0.00970, saving model to LSTM5.h5
Epochs 6-9: val_loss did not improve from 0.00970
Epoch 10: ReduceLROnPlateau reducing learning rate to 1.0e-04
Epochs 11-14: val_loss did not improve from 0.00970
Epoch 15: ReduceLROnPlateau reducing learning rate to 1.0e-05
Epochs 16-55: val_loss did not improve from 0.00970 (val_loss flat around 0.0135-0.0137 with the learning rate held at the 1.0e-05 floor)
Epoch 55: early stopping
EMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 72.47565418845511
RMSE: 8.513263427643661
MAPE: 6.94585827976211
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16080.357, Time=14.46 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14973.799, Time=7.59 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15549.629, Time=2.20 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15317.999, Time=10.66 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16061.924, Time=11.81 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15376.406, Time=18.25 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16186.215, Time=4.40 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15308.706, Time=15.25 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-14920.393, Time=15.78 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-16184.203, Time=3.61 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 104.037 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8118.107
Date: Sun, 12 Dec 2021 AIC -16186.215
Time: 14:39:07 BIC -16068.944
Sample: 0 HQIC -16141.178
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -9.919e-15 0.000 -8.4e-11 1.000 -0.000 0.000
x2 3.194e-15 6.3e-05 5.07e-11 1.000 -0.000 0.000
x3 3.066e-15 7.71e-05 3.98e-11 1.000 -0.000 0.000
x4 1.0000 4.4e-05 2.27e+04 0.000 1.000 1.000
x5 -3.977e-15 4.68e-05 -8.49e-11 1.000 -9.18e-05 9.18e-05
x6 -5.906e-17 8.34e-05 -7.08e-13 1.000 -0.000 0.000
x7 -8.726e-15 7.85e-05 -1.11e-10 1.000 -0.000 0.000
x8 0.0014 4.94e-05 27.704 0.000 0.001 0.001
x9 -3.542e-15 0.001 -2.63e-12 1.000 -0.003 0.003
x10 -0.0012 0.001 -1.566 0.117 -0.003 0.000
x11 0.0052 3.01e-05 172.396 0.000 0.005 0.005
x12 -0.0065 0.000 -49.747 0.000 -0.007 -0.006
x13 1.963e-14 7.85e-05 2.5e-10 1.000 -0.000 0.000
x14 -2.134e-14 0.000 -1.01e-10 1.000 -0.000 0.000
x15 3.464e-12 0.000 2.92e-08 1.000 -0.000 0.000
x16 -7.174e-13 6.45e-05 -1.11e-08 1.000 -0.000 0.000
x17 2.537e-13 7.42e-05 3.42e-09 1.000 -0.000 0.000
x18 -2.964e-15 0.000 -7.78e-12 1.000 -0.001 0.001
x19 -3.613e-12 8.67e-05 -4.17e-08 1.000 -0.000 0.000
x20 6.244e-14 0.000 2.1e-10 1.000 -0.001 0.001
x21 -4.242e-16 0.000 -1.47e-12 1.000 -0.001 0.001
x22 -2.128e-15 0.001 -1.74e-12 1.000 -0.002 0.002
ma.L1 -1.3894 4.16e-05 -3.34e+04 0.000 -1.389 -1.389
ma.L2 0.4036 0.000 3637.465 0.000 0.403 0.404
sigma2 1.287e-10 7.27e-11 1.770 0.077 -1.38e-11 2.71e-10
===================================================================================
Ljung-Box (L1) (Q): 69.00 Jarque-Bera (JB): 6269147.49
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 12.07
Prob(H) (two-sided): 0.00 Kurtosis: 434.65
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.47e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
WARNING:tensorflow:Layer lstm_42 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.13812, saving model to LSTM5.h5 (loss: 0.5416 - val_loss: 0.1381 - lr: 1.0e-03)
Epochs 2-5: val_loss did not improve from 0.13812
Epoch 6: ReduceLROnPlateau reducing learning rate to 1.0e-04
Epochs 7-10: val_loss did not improve from 0.13812
Epoch 11: ReduceLROnPlateau reducing learning rate to 1.0e-05
Epochs 12-51: val_loss did not improve from 0.13812 (val_loss around 0.158-0.164 with the learning rate held at the 1.0e-05 floor)
Epoch 51: early stopping
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 36.387272258848725
RMSE: 6.032186358100081
MAPE: 4.990569235256131
EMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 72.47565418845511
RMSE: 8.513263427643661
MAPE: 6.94585827976211
WMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 29.73090246364654
RMSE: 5.452605107987057
MAPE: 4.390044818690696
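The summary above prints directional accuracy alongside MSE, RMSE and MAPE. The notebook's exact definitions are not shown, so the following is one plausible numpy reading, where "Prediction vs Close" is taken as the share of days on which the predicted move has the same sign as the realised close-to-close move:

```python
import numpy as np

def evaluate(pred, close):
    """Sketch of the metrics printed above; the notebook's exact
    definitions are not shown, so this is one plausible reading."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    # share of days where the predicted move matches the realised move's sign
    direction = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close))) * 100
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    return direction, mse, rmse, mape
```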
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
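The help text above is TA-Lib's built-in documentation for DEMA. As a self-contained stand-in (not `talib.DEMA` itself), the same quantity can be computed with pandas directly from its definition, DEMA(p, n) = 2·EMA(p, n) − EMA(EMA(p, n), n):

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Double Exponential Moving Average: 2*EMA(p, n) - EMA(EMA(p, n), n).
    A pandas stand-in for talib.DEMA -- values differ slightly from
    TA-Lib's, which seeds its EMA with an SMA over the first window."""
    ema = price.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema - ema.ewm(span=timeperiod, adjust=False).mean()
```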
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.780, Time=3.04 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=5.38 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15584.877, Time=10.31 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=6.62 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15271.475, Time=10.32 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15128.422, Time=12.20 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16352.675, Time=22.43 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17028.022, Time=6.37 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17002.621, Time=3.84 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17085.445, Time=8.52 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=20.52 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17001.997, Time=4.47 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16996.668, Time=4.81 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 118.849 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.723
Date: Sun, 12 Dec 2021 AIC -17085.445
Time: 14:45:55 BIC -16958.792
Sample: 0 HQIC -17036.805
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.8e-10 1.36e-20 -2.05e+10 0.000 -2.8e-10 -2.8e-10
x2 -2.817e-10 1.37e-20 -2.06e+10 0.000 -2.82e-10 -2.82e-10
x3 -2.805e-10 1.36e-20 -2.06e+10 0.000 -2.8e-10 -2.8e-10
x4 1.0000 1.37e-20 7.33e+19 0.000 1.000 1.000
x5 -2.598e-10 1.31e-20 -1.98e+10 0.000 -2.6e-10 -2.6e-10
x6 -1.389e-09 2.98e-20 -4.66e+10 0.000 -1.39e-09 -1.39e-09
x7 -2.789e-10 1.36e-20 -2.05e+10 0.000 -2.79e-10 -2.79e-10
x8 -2.761e-10 1.35e-20 -2.04e+10 0.000 -2.76e-10 -2.76e-10
x9 -2.219e-12 3.36e-22 -6.6e+09 0.000 -2.22e-12 -2.22e-12
x10 -1.345e-10 9.37e-21 -1.43e+10 0.000 -1.34e-10 -1.34e-10
x11 -2.899e-10 1.39e-20 -2.09e+10 0.000 -2.9e-10 -2.9e-10
x12 -2.602e-10 1.32e-20 -1.98e+10 0.000 -2.6e-10 -2.6e-10
x13 -2.807e-10 1.36e-20 -2.06e+10 0.000 -2.81e-10 -2.81e-10
x14 -1.87e-09 3.52e-20 -5.31e+10 0.000 -1.87e-09 -1.87e-09
x15 -2.825e-10 1.37e-20 -2.07e+10 0.000 -2.82e-10 -2.82e-10
x16 -8.187e-11 7.33e-21 -1.12e+10 0.000 -8.19e-11 -8.19e-11
x17 -2.441e-10 1.27e-20 -1.92e+10 0.000 -2.44e-10 -2.44e-10
x18 -6.411e-10 2.06e-20 -3.11e+10 0.000 -6.41e-10 -6.41e-10
x19 -2.929e-10 1.39e-20 -2.11e+10 0.000 -2.93e-10 -2.93e-10
x20 -4.339e-10 1.7e-20 -2.56e+10 0.000 -4.34e-10 -4.34e-10
x21 -3.589e-13 2.52e-24 -1.42e+11 0.000 -3.59e-13 -3.59e-13
x22 -1.088e-11 2.36e-24 -4.6e+12 0.000 -1.09e-11 -1.09e-11
ar.L1 -0.4923 1.46e-22 -3.37e+21 0.000 -0.492 -0.492
ar.L2 -0.1923 8.47e-23 -2.27e+21 0.000 -0.192 -0.192
ar.L3 -0.0462 4.02e-23 -1.15e+21 0.000 -0.046 -0.046
ma.L1 -0.7077 3.31e-22 -2.14e+21 0.000 -0.708 -0.708
sigma2 8.99e-11 6.95e-11 1.293 0.196 -4.64e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 55.15 Jarque-Bera (JB): 4171184.78
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.27
Prob(H) (two-sided): 0.00 Kurtosis: 355.49
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.53e+42. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
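The stepwise search keeps the candidate with the lowest AIC, where AIC = 2k − 2 ln L. For the winning SARIMAX(3,3,1) above, the reported log likelihood of 8569.723 with 27 estimated parameters (22 exogenous terms, 3 AR, 1 MA, and sigma2) reproduces the table's AIC of −17085.445. A minimal sketch of the selection rule (the candidate log likelihoods other than the winner's are illustrative):

```python
def aic(n_params: int, log_likelihood: float) -> float:
    """Akaike Information Criterion: AIC = 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

def best_by_aic(candidates: dict) -> tuple:
    """Return the (p, d, q) order whose (k, logL) pair minimises the AIC."""
    return min(candidates, key=lambda order: aic(*candidates[order]))
```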
WARNING:tensorflow:Layer lstm_43 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
[Epoch 1/500: val_loss improved from inf to 0.24371, saving model to LSTM5.h5; Epoch 2: val_loss improved to 0.11420 (best). Epochs 3–52 (10/10 steps, ≈ 130 ms/epoch): val_loss did not improve from 0.11420; ReduceLROnPlateau cut lr from 0.0010 to 1.0000e-04 at epoch 7 and to 1.0000e-05 at epoch 12 (floor reached at epoch 17); loss settled at 0.038–0.050 while val_loss spiked to 0.51 before settling at 0.17–0.20. Epoch 00052: early stopping]
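The schedule visible in the run above (lr cut to 1.0000e-04 at epoch 7, to 1.0000e-05 at epoch 12, floor confirmed at epoch 17) is consistent with a ReduceLROnPlateau callback. A pure-Python sketch of that behaviour, with the patience value inferred from the log rather than taken from the notebook (this is not Keras's implementation):

```python
def plateau_lr_schedule(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Sketch of ReduceLROnPlateau: after `patience` epochs without a new
    best val_loss, multiply the learning rate by `factor`, never dropping
    below `min_lr`.  Returns the lr in effect at each epoch; a reduction
    announced in one epoch takes effect from the next, as in the log."""
    best, wait, lrs = float('inf'), 0, []
    for v in val_losses:
        lrs.append(lr)            # lr used for this epoch
        if v < best:
            best, wait = v, 0
        else:
            wait += 1
            if wait >= patience:  # plateau: cut the rate for the next epoch
                lr, wait = max(lr * factor, min_lr), 0
    return lrs
```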
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 36.387272258848725
RMSE: 6.032186358100081
MAPE: 4.990569235256131
EMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 72.47565418845511
RMSE: 8.513263427643661
MAPE: 6.94585827976211
WMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 29.73090246364654
RMSE: 5.452605107987057
MAPE: 4.390044818690696
DEMA
Prediction vs Close: 50.37% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 39.142904723518775
RMSE: 6.256429071244936
MAPE: 4.920393911559133
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17059.325, Time=4.84 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=5.40 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16133.019, Time=7.50 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=7.17 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16091.980, Time=9.81 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16009.844, Time=14.86 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-15757.180, Time=11.57 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17029.439, Time=6.31 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17000.917, Time=4.91 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=45.027, Time=6.75 sec
Best model: ARIMA(1,3,1)(0,0,0)[0]
Total fit time: 79.127 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 1) Log Likelihood 8554.662
Date: Sun, 12 Dec 2021 AIC -17059.325
Time: 14:57:02 BIC -16942.054
Sample: 0 HQIC -17014.288
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.409e-10 5.52e-21 -2.55e+10 0.000 -1.41e-10 -1.41e-10
x2 -1.378e-10 5.47e-21 -2.52e+10 0.000 -1.38e-10 -1.38e-10
x3 -1.323e-10 5.35e-21 -2.47e+10 0.000 -1.32e-10 -1.32e-10
x4 1.0000 5.41e-21 1.85e+20 0.000 1.000 1.000
x5 -1.221e-10 5.15e-21 -2.37e+10 0.000 -1.22e-10 -1.22e-10
x6 -8.465e-10 1.3e-20 -6.53e+10 0.000 -8.47e-10 -8.47e-10
x7 -1.3e-10 5.32e-21 -2.44e+10 0.000 -1.3e-10 -1.3e-10
x8 -1.267e-10 5.27e-21 -2.41e+10 0.000 -1.27e-10 -1.27e-10
x9 -2.032e-11 6.67e-22 -3.05e+10 0.000 -2.03e-11 -2.03e-11
x10 -5.319e-11 2.3e-21 -2.31e+10 0.000 -5.32e-11 -5.32e-11
x11 -1.275e-10 5.28e-21 -2.42e+10 0.000 -1.28e-10 -1.28e-10
x12 -1.262e-10 5.23e-21 -2.41e+10 0.000 -1.26e-10 -1.26e-10
x13 -1.339e-10 5.39e-21 -2.49e+10 0.000 -1.34e-10 -1.34e-10
x14 -1.092e-09 1.55e-20 -7.06e+10 0.000 -1.09e-09 -1.09e-09
x15 -1.342e-10 5.42e-21 -2.48e+10 0.000 -1.34e-10 -1.34e-10
x16 -2.01e-10 6.63e-21 -3.03e+10 0.000 -2.01e-10 -2.01e-10
x17 -1.144e-10 5.01e-21 -2.29e+10 0.000 -1.14e-10 -1.14e-10
x18 -9.245e-11 4.49e-21 -2.06e+10 0.000 -9.24e-11 -9.24e-11
x19 -1.646e-10 6.01e-21 -2.74e+10 0.000 -1.65e-10 -1.65e-10
x20 -2.482e-10 7.35e-21 -3.37e+10 0.000 -2.48e-10 -2.48e-10
x21 -3.385e-12 3.14e-24 -1.08e+12 0.000 -3.39e-12 -3.39e-12
x22 -8.066e-11 2.47e-23 -3.26e+12 0.000 -8.07e-11 -8.07e-11
ar.L1 -0.2877 2.48e-22 -1.16e+21 0.000 -0.288 -0.288
ma.L1 -0.9134 1.05e-21 -8.7e+20 0.000 -0.913 -0.913
sigma2 9.332e-11 6.96e-11 1.340 0.180 -4.32e-11 2.3e-10
===================================================================================
Ljung-Box (L1) (Q): 84.37 Jarque-Bera (JB): 4308764.36
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 5.22
Prob(H) (two-sided): 0.00 Kurtosis: 361.26
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.32e+42. Standard errors may be unstable.
ARIMA order: (1, 3, 1)
WARNING:tensorflow:Layer lstm_44 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
[Epoch 1/500: val_loss improved from inf to 0.28966, saving model to LSTM5.h5; Epoch 2: val_loss improved to 0.03060 (best). Epochs 3–52 (45/45 steps, ≈ 500 ms/epoch): val_loss did not improve from 0.03060; ReduceLROnPlateau cut lr from 0.0010 to 1.0000e-04 at epoch 7 and to 1.0000e-05 at epoch 12 (floor reached at epoch 17); loss settled at 0.023–0.065 while val_loss drifted down from 0.5811 to 0.2466. Epoch 00052: early stopping]
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 36.387272258848725
RMSE: 6.032186358100081
MAPE: 4.990569235256131
EMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 72.47565418845511
RMSE: 8.513263427643661
MAPE: 6.94585827976211
WMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 29.73090246364654
RMSE: 5.452605107987057
MAPE: 4.390044818690696
DEMA
Prediction vs Close: 50.37% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 39.142904723518775
RMSE: 6.256429071244936
MAPE: 4.920393911559133
KAMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 52.56428057408519
RMSE: 7.25012279717283
MAPE: 6.170488218753182
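With five moving averages evaluated so far, the models can be ranked programmatically. By RMSE (one reasonable criterion), WMA leads on both RMSE and MAPE, while EMA has the highest directional accuracy against the close (54.48%). The figures below are copied from the cumulative summary above:

```python
# RMSE / MAPE figures copied from the printed summary above
results = {
    'SMA':  {'rmse': 6.0322, 'mape': 4.9906},
    'EMA':  {'rmse': 8.5133, 'mape': 6.9459},
    'WMA':  {'rmse': 5.4526, 'mape': 4.3900},
    'DEMA': {'rmse': 6.2564, 'mape': 4.9204},
    'KAMA': {'rmse': 7.2501, 'mape': 6.1705},
}
best_rmse = min(results, key=lambda k: results[k]['rmse'])
best_mape = min(results, key=lambda k: results[k]['mape'])
```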
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.733, Time=3.39 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.592, Time=5.41 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15587.551, Time=10.45 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.592, Time=7.47 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16365.334, Time=13.07 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16163.760, Time=16.61 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16245.181, Time=17.57 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17028.017, Time=6.11 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17106.133, Time=7.20 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17085.425, Time=8.11 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=-17000.553, Time=4.74 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 100.163 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood 8579.066
Date: Sun, 12 Dec 2021 AIC -17106.133
Time: 15:02:53 BIC -16984.171
Sample: 0 HQIC -17059.294
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -3.048e-10 1.69e-20 -1.8e+10 0.000 -3.05e-10 -3.05e-10
x2 -3.042e-10 1.75e-20 -1.74e+10 0.000 -3.04e-10 -3.04e-10
x3 -3.108e-10 1.62e-20 -1.92e+10 0.000 -3.11e-10 -3.11e-10
x4 1.0000 1.69e-20 5.91e+19 0.000 1.000 1.000
x5 -2.767e-10 1.61e-20 -1.72e+10 0.000 -2.77e-10 -2.77e-10
x6 -6.072e-09 1.38e-19 -4.42e+10 0.000 -6.07e-09 -6.07e-09
x7 -2.8e-10 1.62e-20 -1.73e+10 0.000 -2.8e-10 -2.8e-10
x8 -2.792e-10 1.65e-20 -1.69e+10 0.000 -2.79e-10 -2.79e-10
x9 -1.502e-10 1.02e-21 -1.48e+11 0.000 -1.5e-10 -1.5e-10
x10 -2.482e-10 4.3e-21 -5.77e+10 0.000 -2.48e-10 -2.48e-10
x11 -2.764e-10 1.64e-20 -1.69e+10 0.000 -2.76e-10 -2.76e-10
x12 -2.857e-10 1.64e-20 -1.74e+10 0.000 -2.86e-10 -2.86e-10
x13 -2.944e-10 1.66e-20 -1.77e+10 0.000 -2.94e-10 -2.94e-10
x14 -2.403e-09 4.86e-20 -4.95e+10 0.000 -2.4e-09 -2.4e-09
x15 -3.368e-10 1.81e-20 -1.86e+10 0.000 -3.37e-10 -3.37e-10
x16 -2.169e-10 1.45e-20 -1.49e+10 0.000 -2.17e-10 -2.17e-10
x17 -2.124e-10 1.44e-20 -1.47e+10 0.000 -2.12e-10 -2.12e-10
x18 -9.125e-10 2.98e-20 -3.06e+10 0.000 -9.13e-10 -9.13e-10
x19 -3.698e-10 1.9e-20 -1.95e+10 0.000 -3.7e-10 -3.7e-10
x20 -8.9e-10 2.94e-20 -3.03e+10 0.000 -8.9e-10 -8.9e-10
x21 -1.844e-11 1.86e-22 -9.9e+10 0.000 -1.84e-11 -1.84e-11
x22 -2.169e-10 5.04e-22 -4.3e+11 0.000 -2.17e-10 -2.17e-10
ar.L1 -1.2011 7.4e-23 -1.62e+22 0.000 -1.201 -1.201
ar.L2 -0.9017 1.51e-22 -5.98e+21 0.000 -0.902 -0.902
ar.L3 -0.4014 9.48e-23 -4.23e+21 0.000 -0.401 -0.401
sigma2 8.782e-11 6.95e-11 1.264 0.206 -4.84e-11 2.24e-10
===================================================================================
Ljung-Box (L1) (Q): 3.61 Jarque-Bera (JB): 16191.93
Prob(Q): 0.06 Prob(JB): 0.00
Heteroskedasticity (H): 0.35 Skew: 0.59
Prob(H) (two-sided): 0.00 Kurtosis: 24.94
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.23e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm_45 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.01891, saving model to LSTM5.h5 (loss: 0.2885 - val_loss: 0.0189 - lr: 0.0010)
Epochs 2-5/500: val_loss did not improve from 0.01891 (loss fell 0.1129 → 0.0452)
Epoch 6/500: ReduceLROnPlateau reducing learning rate to 1.0000e-04
Epoch 11/500: ReduceLROnPlateau reducing learning rate to 1.0000e-05
Epoch 16/500: ReduceLROnPlateau reducing learning rate to 1e-05 (min_lr reached)
Epochs 7-50/500: val_loss did not improve from 0.01891 (loss plateaued near 0.028-0.037; val_loss near 0.05-0.06)
Epoch 51/500: early stopping
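The schedule visible in this log (learning-rate drops at epochs 6 and 11, early stop at epoch 51 after the best val_loss at epoch 1) is the standard `ReduceLROnPlateau` plus `EarlyStopping` behavior. A minimal pure-Python sketch of that bookkeeping; the `factor`, patience, and `min_lr` values are assumptions inferred from the log, not confirmed notebook settings:

```python
def simulate_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                      stop_patience=50, min_lr=1e-5):
    """Mimic ReduceLROnPlateau + EarlyStopping bookkeeping (assumed settings)."""
    best = float("inf")
    since_improve = 0
    lrs = []
    for epoch, vl in enumerate(val_losses, start=1):
        lrs.append(lr)  # LR in effect for this epoch
        if vl < best:
            best, since_improve = vl, 0
        else:
            since_improve += 1
        # reduce LR every lr_patience epochs without improvement, floored at min_lr
        if since_improve and since_improve % lr_patience == 0:
            lr = max(lr * factor, min_lr)
        if since_improve >= stop_patience:
            return lrs, epoch  # early stop
    return lrs, len(val_losses)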
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 36.387272258848725
RMSE: 6.032186358100081
MAPE: 4.990569235256131
EMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 72.47565418845511
RMSE: 8.513263427643661
MAPE: 6.94585827976211
WMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 29.73090246364654
RMSE: 5.452605107987057
MAPE: 4.390044818690696
DEMA
Prediction vs Close: 50.37% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 39.142904723518775
RMSE: 6.256429071244936
MAPE: 4.920393911559133
KAMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 52.56428057408519
RMSE: 7.25012279717283
MAPE: 6.170488218753182
MIDPOINT
Prediction vs Close: 50.0% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 44.21016710593271
RMSE: 6.649072650071791
MAPE: 5.476790088019583
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16954.347, Time=3.12 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14725.736, Time=3.16 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16732.390, Time=11.00 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15913.358, Time=9.36 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16550.077, Time=13.57 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15004.835, Time=12.29 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16027.273, Time=12.90 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-16934.995, Time=3.16 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16924.758, Time=4.73 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=-16952.347, Time=3.32 sec
Best model: ARIMA(1,3,1)(0,0,0)[0]
Total fit time: 76.633 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 1) Log Likelihood 8502.173
Date: Sun, 12 Dec 2021 AIC -16954.347
Time: 15:06:52 BIC -16837.076
Sample: 0 HQIC -16909.310
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 3.409e-14 2.62e-06 1.3e-08 1.000 -5.13e-06 5.13e-06
x2 1.816e-14 2.62e-06 6.93e-09 1.000 -5.13e-06 5.13e-06
x3 -2.039e-15 2.47e-06 -8.26e-10 1.000 -4.84e-06 4.84e-06
x4 1.0000 2.5e-06 4e+05 0.000 1.000 1.000
x5 2.488e-12 2.48e-06 1e-06 1.000 -4.86e-06 4.86e-06
x6 2.84e-15 6.48e-06 4.38e-10 1.000 -1.27e-05 1.27e-05
x7 3.618e-13 3.24e-06 1.12e-07 1.000 -6.36e-06 6.36e-06
x8 -0.0002 4.44e-06 -43.079 0.000 -0.000 -0.000
x9 2.93e-14 6.3e-08 4.65e-07 1.000 -1.23e-07 1.23e-07
x10 -2.843e-05 9.63e-06 -2.951 0.003 -4.73e-05 -9.55e-06
x11 0.0002 3.28e-06 53.981 0.000 0.000 0.000
x12 0.0001 5.63e-06 23.078 0.000 0.000 0.000
x13 -2.595e-14 2.63e-06 -9.88e-09 1.000 -5.15e-06 5.15e-06
x14 -6.497e-14 5.76e-06 -1.13e-08 1.000 -1.13e-05 1.13e-05
x15 1.699e-12 3.08e-06 5.51e-07 1.000 -6.04e-06 6.04e-06
x16 -3.969e-12 4.77e-06 -8.33e-07 1.000 -9.34e-06 9.34e-06
x17 5.452e-12 8.58e-07 6.35e-06 1.000 -1.68e-06 1.68e-06
x18 -3.68e-13 1.33e-05 -2.76e-08 1.000 -2.61e-05 2.61e-05
x19 -5.643e-13 4.61e-06 -1.22e-07 1.000 -9.03e-06 9.03e-06
x20 6.651e-14 4.9e-05 1.36e-09 1.000 -9.61e-05 9.61e-05
x21 -1.76e-16 8.47e-11 -2.08e-06 1.000 -1.66e-10 1.66e-10
x22 -7.82e-16 1.75e-10 -4.47e-06 1.000 -3.43e-10 3.43e-10
ar.L1 -0.2858 5.46e-08 -5.24e+06 0.000 -0.286 -0.286
ma.L1 -0.9143 5.59e-08 -1.63e+07 0.000 -0.914 -0.914
sigma2 1e-10 6.99e-11 1.430 0.153 -3.71e-11 2.37e-10
===================================================================================
Ljung-Box (L1) (Q): 84.00 Jarque-Bera (JB): 4822228.07
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -6.05
Prob(H) (two-sided): 0.00 Kurtosis: 381.97
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.54e+27. Standard errors may be unstable.
ARIMA order: (1, 3, 1)
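The stepwise search above ranks candidates by AIC, which follows directly from the log likelihood in the summary. A quick sanity check, assuming k = 25 estimated parameters (the 22 exogenous terms x1-x22 plus ar.L1, ma.L1, and sigma2) and an effective sample of 808 − 3 = 805 observations after third-order differencing:

```python
import math

log_l = 8502.173   # Log Likelihood from the SARIMAX(1, 3, 1) summary
k = 25             # 22 exogenous terms + ar.L1 + ma.L1 + sigma2 (assumed count)
n_eff = 808 - 3    # observations remaining after d = 3 differencing

aic = 2 * k - 2 * log_l                 # ≈ -16954.346, matching the reported AIC
bic = k * math.log(n_eff) - 2 * log_l   # ≈ -16837.075, matching the reported BIC
```

The near-exact match confirms how pmdarima scores each candidate order during its stepwise search.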
WARNING:tensorflow:Layer lstm_46 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.03368, saving model to LSTM5.h5 (loss: 0.4064 - val_loss: 0.0337 - lr: 0.0010)
Epoch 5/500: val_loss improved from 0.03368 to 0.01773, saving model to LSTM5.h5 (loss: 0.0346 - val_loss: 0.0177 - lr: 0.0010)
Epoch 10/500: ReduceLROnPlateau reducing learning rate to 1.0000e-04
Epoch 15/500: ReduceLROnPlateau reducing learning rate to 1.0000e-05
Epoch 20/500: ReduceLROnPlateau reducing learning rate to 1e-05 (min_lr reached)
Epochs 6-54/500: val_loss did not improve from 0.01773 (loss plateaued near 0.024-0.032; val_loss near 0.05)
Epoch 55/500: early stopping
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 36.387272258848725
RMSE: 6.032186358100081
MAPE: 4.990569235256131
EMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 72.47565418845511
RMSE: 8.513263427643661
MAPE: 6.94585827976211
WMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 29.73090246364654
RMSE: 5.452605107987057
MAPE: 4.390044818690696
DEMA
Prediction vs Close: 50.37% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 39.142904723518775
RMSE: 6.256429071244936
MAPE: 4.920393911559133
KAMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 52.56428057408519
RMSE: 7.25012279717283
MAPE: 6.170488218753182
MIDPOINT
Prediction vs Close: 50.0% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 44.21016710593271
RMSE: 6.649072650071791
MAPE: 5.476790088019583
T3
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 64.94642382025489
RMSE: 8.058934409725326
MAPE: 6.415762745110697
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
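TEMA compensates for the lag of a plain EMA by combining three nested EMAs: TEMA = 3·EMA1 − 3·EMA2 + EMA3, where EMA2 is the EMA of EMA1 and EMA3 is the EMA of EMA2. A minimal NumPy sketch of that formula (TA-Lib's own implementation differs in its warm-up/lookback handling, so treat this as illustrative):

```python
import numpy as np

def ema(x, timeperiod):
    """Recursive exponential moving average with alpha = 2 / (timeperiod + 1)."""
    alpha = 2.0 / (timeperiod + 1)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def tema(price, timeperiod=30):
    """Triple Exponential Moving Average: 3*EMA1 - 3*EMA2 + EMA3."""
    e1 = ema(price, timeperiod)
    e2 = ema(e1, timeperiod)
    e3 = ema(e2, timeperiod)
    return 3 * e1 - 3 * e2 + e3
```

On a constant series the three EMAs are all equal to the constant, so TEMA returns it unchanged (3k − 3k + k = k), a quick way to check the weighting.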
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16412.930, Time=13.44 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14867.265, Time=8.15 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15902.803, Time=6.80 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15117.003, Time=9.36 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15669.652, Time=9.45 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-12676.374, Time=11.44 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16418.724, Time=11.14 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15107.772, Time=19.42 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15708.742, Time=22.37 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-13418.641, Time=28.39 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 139.987 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8234.362
Date: Sun, 12 Dec 2021 AIC -16418.724
Time: 15:13:15 BIC -16301.453
Sample: 0 HQIC -16373.687
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.784e-07 0.001 -0.000 1.000 -0.002 0.002
x2 -1.784e-07 0.001 -0.000 1.000 -0.003 0.003
x3 -1.794e-07 0.001 -0.000 1.000 -0.002 0.002
x4 1.0000 0.000 2616.546 0.000 0.999 1.001
x5 -1.704e-07 0.000 -0.000 1.000 -0.001 0.001
x6 -2.858e-07 3.31e-05 -0.009 0.993 -6.52e-05 6.46e-05
x7 -1.754e-07 0.001 -0.000 1.000 -0.002 0.002
x8 0.0007 0.000 3.091 0.002 0.000 0.001
x9 3.313e-08 0.000 9.39e-05 1.000 -0.001 0.001
x10 3.499e-06 0.000 0.022 0.983 -0.000 0.000
x11 -0.0003 0.000 -1.284 0.199 -0.001 0.000
x12 -6.362e-05 0.000 -0.260 0.795 -0.001 0.000
x13 -1.783e-07 0.000 -0.001 0.999 -0.000 0.000
x14 -5.244e-07 0.001 -0.001 0.999 -0.001 0.001
x15 -1.737e-07 0.000 -0.001 0.999 -0.000 0.000
x16 -2.583e-07 0.000 -0.001 0.999 -0.000 0.000
x17 -1.74e-07 0.000 -0.001 0.999 -0.000 0.000
x18 -5.776e-08 0.000 -0.000 1.000 -0.000 0.000
x19 -1.95e-07 0.000 -0.002 0.999 -0.000 0.000
x20 1.72e-07 0.000 0.001 0.999 -0.000 0.000
x21 -7.548e-10 0.001 -9.93e-07 1.000 -0.001 0.001
x22 -1.194e-08 0.000 -8.47e-05 1.000 -0.000 0.000
ma.L1 -1.3862 1.58e-05 -8.78e+04 0.000 -1.386 -1.386
ma.L2 0.4019 4.28e-05 9396.834 0.000 0.402 0.402
sigma2 1.265e-10 7.58e-11 1.669 0.095 -2.2e-11 2.75e-10
===================================================================================
Ljung-Box (L1) (Q): 66.79 Jarque-Bera (JB): 5900482.38
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -11.32
Prob(H) (two-sided): 0.00 Kurtosis: 421.81
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.07e+19. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
WARNING:tensorflow:Layer lstm_47 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.07083, saving model to LSTM5.h5 (loss: 0.1074 - val_loss: 0.0708 - lr: 0.0010)
Epoch 6/500: val_loss improved from 0.07083 to 0.02678, saving model to LSTM5.h5 (loss: 0.0358 - val_loss: 0.0268 - lr: 0.0010)
Epoch 11/500: ReduceLROnPlateau reducing learning rate to 1.0000e-04
Epoch 16/500: ReduceLROnPlateau reducing learning rate to 1.0000e-05
Epoch 21/500: ReduceLROnPlateau reducing learning rate to 1e-05 (min_lr reached)
Epochs 7-55/500: val_loss did not improve from 0.02678 (loss plateaued near 0.020-0.025; val_loss near 0.10)
Epoch 56/500: early stopping
MA        Pred vs Close   Pred vs Pred   MSE       RMSE     MAE
SMA       49.63%          50.75%         36.3873   6.0322   4.9906
EMA       54.48%          50.37%         72.4757   8.5133   6.9459
WMA       51.12%          46.64%         29.7309   5.4526   4.3900
DEMA      50.37%          46.64%         39.1429   6.2564   4.9204
KAMA      51.87%          46.27%         52.5643   7.2501   6.1705
MIDPOINT  50.00%          47.01%         44.2102   6.6491   5.4768
T3        54.10%          47.39%         64.9464   8.0589   6.4158
TEMA      50.00%          49.25%         29.2150   5.4051   4.4497
Runtime: mins: 56.50
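The two accuracy figures above count direction hits, not magnitudes: "Prediction vs Close" compares each prediction to the previous actual close, "Prediction vs Prediction" compares it to the previous prediction. A minimal pure-Python sketch of both measures (the `directional_accuracy` helper is illustrative, not part of the notebook):

```python
def directional_accuracy(pred, actual):
    """Return (hit rate vs previous close, hit rate vs previous prediction)."""
    hits_close, hits_pred = [], []
    for i in range(1, len(pred)):
        up = actual[i] > actual[i - 1]
        down = actual[i] < actual[i - 1]
        # Did the prediction sit on the correct side of the previous close?
        hits_close.append(1 if ((pred[i] > actual[i - 1] and up) or
                                (pred[i] < actual[i - 1] and down)) else 0)
        # Did consecutive predictions move in the same direction as the actuals?
        hits_pred.append(1 if ((pred[i] > pred[i - 1] and up) or
                               (pred[i] < pred[i - 1] and down)) else 0)
    return sum(hits_close) / len(hits_close), sum(hits_pred) / len(hits_pred)
```

Ties (no movement) count as misses, matching the `else: append(0)` branch in the notebook's loop.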
from google.colab import files
import cv2
import matplotlib.pyplot as plt
uploaded = files.upload()
imgfile = 'Experiment5'
img = cv2.imread(imgfile + '.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; matplotlib expects RGB
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
import json
with open('simulation5_data.json') as json_file:
    simulation5 = json.load(json_file)
imgfile = 'Experiment5'
for SIM in simulation5:
    plot_train(simulation5, SIM)
    plot_test(simulation5, SIM)
LSTM train/test error metrics per moving average:

MA        Train RMSE   Train MSE   Train MAE   Test RMSE   Test MSE   Test MAE
SMA        7.8968       62.3594     6.8576      6.0322      36.3873    4.9906
EMA        9.3624       87.6552     8.2091      8.5133      72.4757    6.9459
WMA        9.6604       93.3230     8.5434      5.4526      29.7309    4.3900
DEMA      10.7787      116.1793     9.4831      6.2564      39.1429    4.9204
KAMA       9.2248       85.0962     8.3913      7.2501      52.5643    6.1705
MIDPOINT   8.4156       70.8220     7.4183      6.6491      44.2102    5.4768
T3        10.9725      120.3960     9.8523      8.0589      64.9464    6.4158
TEMA       6.6511       44.2372     4.5425      5.4051      29.2150    4.4497
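The metrics above follow the standard definitions; for reference, a minimal pure-Python sketch (the helper name is illustrative):

```python
import math

def regression_errors(actual, predicted):
    """Return (MSE, RMSE, MAE) for two equal-length sequences."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    return mse, math.sqrt(mse), mae
```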
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    # Determine model order via stepwise AIC search
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')
    # Generate one-step-ahead predictions, re-fitting on an expanding window
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])
    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1, 1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1, 1))
    # Generate error data on the original (inverse-transformed) scale
    mse = mean_squared_error(y_test_, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc, predictionte.flatten().tolist(), mse, rmse, mae
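The loop inside `get_arima_exog` re-fits the model once per test step on an expanding history before forecasting one step ahead. The pattern in isolation, with a naive last-value forecaster standing in for the re-fit ARIMA (purely illustrative):

```python
def rolling_one_step_forecast(train, test, forecast_fn):
    """Expanding-window one-step-ahead forecasting: after each test step,
    the observed value is appended to the history before the next forecast."""
    history = list(train)
    preds = []
    for obs in test:
        preds.append(forecast_fn(history))  # forecast before seeing obs
        history.append(obs)                 # then reveal the true value
    return preds

naive = lambda history: history[-1]  # stand-in for a re-fit ARIMA's predict()
```

This walk-forward scheme avoids look-ahead bias but is expensive: with ARIMA it triggers one full fit per test observation, which is why the SMA run above took ~56 minutes.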
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape: X has shape (samples, n_steps_in, features),
    # e.g. each 3 x 21 slice is 3 days' worth of data; yc holds the corresponding closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # constant offset subtracted from test predictions (ad hoc bias correction)
    input_dim = X_train.shape[1]     # n_steps_in
    feature_size = X_train.shape[2]  # number of features
    output_dim = y_train.shape[1]    # forecast horizon
    # Option 1: single LSTM layer + dense head (trained and plotted with the
    # same common code as Option 2 below)
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64, activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
    # Option 2: bidirectional LSTM
    model = Sequential()
    model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
    model.add(Dense(64))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # Common code
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM6.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file + '.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False)
    history = model.fit(X_train, y_train, epochs=500, batch_size=int(optimized_period[ma]),
                        verbose=2, callbacks=callbacks, validation_data=(X_test, y_test), shuffle=False)
    # Plot loss
    fname2 = img_file + '-' + ma
    plt.title(fname2 + ' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2 + '.png', dpi='figure')
    pyplot.show()
    # Option 3: LSTM with a custom double-tanh output activation (trained and
    # plotted with the same common code as Option 2)
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'
    # def double_tanh(x):
    #     return (K.tanh(x) * 2)
    # get_custom_objects().update({'double_tanh': Double_Tanh(double_tanh)})
    # # on weight regularization, see
    # # https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model = Sequential()
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2,
    #                kernel_regularizer=l1_l2(0.00, 0.00), bias_regularizer=l1_l2(0.00, 0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # Option 4: stacked LSTM (trained and plotted with the same common code as Option 2)
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(X_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len / 2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for row in predictiontr:
        outputtr.extend(row)
    predictiontr = outputtr
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    # Training error, computed on the original (inverse-transformed) scale
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, predictiontr)
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()  # apply ad hoc offset
    outputte = []
    for row in predictionte:
        outputte.extend(row)
    predictionte = outputte
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    # Test error, computed on the original (inverse-transformed) scale
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, predictionte)
    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
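Both functions report errors on the original price scale, so the inverse transform matters. What `MinMaxScaler(feature_range=(-1, 1))` does can be written out in plain Python (a sketch of the formulas, not scikit-learn's implementation; helper names are illustrative):

```python
def minmax_fit(values, lo=-1.0, hi=1.0):
    # Learn the affine map [min(values), max(values)] -> [lo, hi]
    vmin, vmax = min(values), max(values)
    scale = (hi - lo) / (vmax - vmin)
    return scale, vmin, lo

def minmax_transform(x, params):
    scale, vmin, lo = params
    return (x - vmin) * scale + lo

def minmax_inverse(y, params):
    # Undo the transform so errors can be reported on the original scale
    scale, vmin, lo = params
    return (y - lo) / scale + vmin
```

Note the pitfall the notebook sidesteps by fitting once per series: a scaler fitted on one distribution (e.g. train only) will map test values outside [min, max] beyond the (-1, 1) range, which is harmless for the inverse transform but can saturate tanh-based layers.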
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation6 = {}
    imgfile = 'Experiment6'
    for ma in optimized_period:
        print(ma)
        print(functions[ma])
        print(int(optimized_period[ma]))
        # Low-volatility component: the moving average of each column
        low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
        low_vol = low_vol.fillna(0)
        low_vol_data = df['close']
        # High-volatility component: original series minus the moving average
        high_vol = pd.DataFrame()
        df2 = df.copy()
        for col in df2.columns:
            if col in low_vol.columns:
                high_vol[col] = df2[col].subtract(low_vol[col], fill_value=0)
        high_vol_data = df['close']
        # Generate ARIMA and LSTM predictions
        print('\nWorking on ' + ma + ' predictions')
        try:
            print('parameters used : ', train_len, test_len)
            low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = \
                get_arima_exog(low_vol, low_vol_data, train_len, test_len)
        except Exception:
            print('ARIMA error, skipping to next MA type')
            continue
        Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, \
            high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data,
                                                                 train_len, test_len, imgfile, ma)
        final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)  # ignoring first 3 steps
        mse_ftr = mean_squared_error(df['close'].head(train_len).values, final_prediction_tr.values)
        rmse_ftr = mse_ftr ** 0.5
        mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        # Recombine: ARIMA forecast of the smooth component + LSTM forecast of the residual
        final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
        mse = mean_squared_error(df['close'].tail(test_len).values, final_prediction.values)
        rmse = mse ** 0.5
        mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        # Generate prediction accuracy
        actual = df['close'].tail(test_len).values
        result_1 = []
        result_2 = []
        for i in range(1, len(final_prediction)):
            # Compare prediction to previous close price
            if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                result_1.append(1)
            elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                result_1.append(1)
            else:
                result_1.append(0)
            # Compare prediction to previous prediction
            if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                result_2.append(1)
            elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                result_2.append(1)
            else:
                result_2.append(0)
        accuracy_1 = np.mean(result_1)
        accuracy_2 = np.mean(result_2)
        simulation6[ma] = {'low_vol': {'original': list(low_vol_Original), 'prediction': list(low_vol_prediction),
                                       'mse': low_vol_mse, 'rmse': low_vol_rmse, 'mae': low_vol_mae},
                           'high_vol': {'original': list(high_vol_Original), 'prediction': list(high_vol_prediction),
                                        'mse': high_vol_mse, 'rmse': high_vol_rmse, 'mae': high_vol_mae},
                           'final_tr': {'original': df['close'].head(train_len).tolist(),
                                        'prediction': final_prediction_tr.values.tolist(),
                                        'mse': mse_ftr, 'rmse': rmse_ftr, 'mae': mae_ftr},
                           'final': {'original': df['close'].tail(test_len).tolist(),
                                     'prediction': final_prediction.values.tolist(),
                                     'mse': mse, 'rmse': rmse, 'mae': mae},
                           'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
        # Save simulation data here as a checkpoint
        with open('simulation6_data.json', 'w') as fp:
            json.dump(simulation6, fp)
    for ma in simulation6.keys():
        print('\n' + ma)
        print('Prediction vs Close:\t\t' + str(round(100 * simulation6[ma]['accuracy']['prediction vs close'], 2)) + '% Accuracy')
        print('Prediction vs Prediction:\t' + str(round(100 * simulation6[ma]['accuracy']['prediction vs prediction'], 2)) + '% Accuracy')
        print('MSE:\t', simulation6[ma]['final']['mse'],
              '\nRMSE:\t', simulation6[ma]['final']['rmse'],
              '\nMAE:\t', simulation6[ma]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:', elapsed / 60)
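The main loop's decomposition step (series = moving average + residual, forecast each part separately, add the forecasts back) can be sketched in isolation. Here an expanding-window simple moving average stands in for the TA-Lib call, which instead pads the first `period - 1` positions with NaN (filled with 0 in the notebook):

```python
def sma_decompose(series, period):
    """Split a series into a smooth SMA component and a residual such that
    smooth[i] + residual[i] == series[i] for every i."""
    smooth = []
    for i in range(len(series)):
        window = series[max(0, i - period + 1): i + 1]  # expanding head, then rolling
        smooth.append(sum(window) / len(window))
    residual = [s - m for s, m in zip(series, smooth)]
    return smooth, residual
```

The low-volatility `smooth` series goes to ARIMA and the high-volatility `residual` to the LSTM; because the split is exact by construction, adding the two component forecasts yields a forecast of the original series.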
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-14771.778, Time=13.25 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14135.387, Time=6.10 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15280.870, Time=10.58 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15393.475, Time=9.00 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-14981.217, Time=4.97 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14516.868, Time=13.91 sec
ARIMA(0,3,1)(0,0,0)[0] intercept : AIC=-15663.967, Time=10.04 sec
ARIMA(0,3,0)(0,0,0)[0] intercept : AIC=-13838.679, Time=5.35 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=-14734.479, Time=6.37 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-14866.409, Time=7.70 sec
ARIMA(1,3,0)(0,0,0)[0] intercept : AIC=-16157.403, Time=13.81 sec
ARIMA(2,3,0)(0,0,0)[0] intercept : AIC=-14855.623, Time=11.62 sec
ARIMA(2,3,1)(0,0,0)[0] intercept : AIC=-14720.644, Time=11.60 sec
Best model: ARIMA(1,3,0)(0,0,0)[0] intercept
Total fit time: 124.355 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 0) Log Likelihood 8103.701
Date: Sun, 12 Dec 2021 AIC -16157.403
Time: 18:04:04 BIC -16040.132
Sample: 0 HQIC -16112.366
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
intercept -2.802e-06 7.54e-07 -3.714 0.000 -4.28e-06 -1.32e-06
x1 -2.598e-05 0.001 -0.041 0.967 -0.001 0.001
x2 -2.599e-05 0.001 -0.047 0.963 -0.001 0.001
x3 -2.615e-05 0.001 -0.038 0.970 -0.001 0.001
x4 1.0000 0.001 1507.083 0.000 0.999 1.001
x5 -2.485e-05 0.001 -0.038 0.970 -0.001 0.001
x6 -2.807e-05 3.32e-05 -0.845 0.398 -9.32e-05 3.71e-05
x7 -2.593e-05 8.29e-05 -0.313 0.755 -0.000 0.000
x8 0.0019 7.15e-05 26.753 0.000 0.002 0.002
x9 -1.867e-06 0.001 -0.003 0.998 -0.001 0.001
x10 0.0003 0.000 0.644 0.520 -0.001 0.001
x11 -0.0025 8.93e-05 -28.145 0.000 -0.003 -0.002
x12 0.0015 8.06e-05 18.290 0.000 0.001 0.002
x13 -2.61e-05 0.000 -0.076 0.939 -0.001 0.001
x14 -7.719e-05 0.000 -0.374 0.708 -0.000 0.000
x15 -2.829e-05 8.57e-05 -0.330 0.741 -0.000 0.000
x16 -2.424e-05 0.000 -0.142 0.887 -0.000 0.000
x17 -2.292e-05 9.81e-05 -0.234 0.815 -0.000 0.000
x18 -4.39e-05 0.000 -0.429 0.668 -0.000 0.000
x19 -3.005e-05 0.000 -0.293 0.770 -0.000 0.000
x20 4.559e-05 9.36e-05 0.487 0.626 -0.000 0.000
x21 -7.981e-10 0.001 -9.88e-07 1.000 -0.002 0.002
x22 -1.557e-08 0.000 -0.000 1.000 -0.000 0.000
ar.L1 -0.6667 6.95e-05 -9587.073 0.000 -0.667 -0.667
sigma2 1.314e-10 7.8e-11 1.686 0.092 -2.14e-11 2.84e-10
===================================================================================
Ljung-Box (L1) (Q): 90.59 Jarque-Bera (JB): 3138023.60
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.03 Skew: 5.01
Prob(H) (two-sided): 0.00 Kurtosis: 308.71
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.36e+19. Standard errors may be unstable.
ARIMA order: (1, 3, 0)
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 1/500 Epoch 00001: val_loss improved from inf to 0.01892, saving model to LSTM6.h5 48/48 - 7s - loss: 0.1326 - accuracy: 0.0000e+00 - val_loss: 0.0189 - val_accuracy: 0.0037 - lr: 0.0010 - 7s/epoch - 144ms/step Epoch 2/500 Epoch 00002: val_loss improved from 0.01892 to 0.00783, saving model to LSTM6.h5 48/48 - 0s - loss: 0.0203 - accuracy: 0.0000e+00 - val_loss: 0.0078 - val_accuracy: 0.0037 - lr: 0.0010 - 235ms/epoch - 5ms/step Epoch 3/500 Epoch 00003: val_loss did not improve from 0.00783 48/48 - 0s - loss: 0.0349 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 0.0010 - 219ms/epoch - 5ms/step Epoch 4/500 Epoch 00004: val_loss did not improve from 0.00783 48/48 - 0s - loss: 0.0187 - accuracy: 0.0000e+00 - val_loss: 0.0234 - val_accuracy: 0.0037 - lr: 0.0010 - 216ms/epoch - 4ms/step Epoch 5/500 Epoch 00005: val_loss did not improve from 0.00783 48/48 - 0s - loss: 0.0109 - accuracy: 0.0000e+00 - val_loss: 0.1099 - val_accuracy: 0.0037 - lr: 0.0010 - 219ms/epoch - 5ms/step Epoch 6/500 Epoch 00006: val_loss did not improve from 0.00783 48/48 - 0s - loss: 0.0163 - accuracy: 0.0000e+00 - val_loss: 0.0212 - val_accuracy: 0.0037 - lr: 0.0010 - 217ms/epoch - 5ms/step Epoch 7/500 Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 
Epoch 00007: val_loss did not improve from 0.00783 48/48 - 0s - loss: 0.0078 - accuracy: 0.0000e+00 - val_loss: 0.0930 - val_accuracy: 0.0037 - lr: 0.0010 - 217ms/epoch - 5ms/step Epoch 8/500 Epoch 00008: val_loss did not improve from 0.00783 48/48 - 0s - loss: 0.0162 - accuracy: 0.0000e+00 - val_loss: 0.0154 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 221ms/epoch - 5ms/step Epoch 9/500 Epoch 00009: val_loss did not improve from 0.00783 48/48 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0130 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 211ms/epoch - 4ms/step Epoch 10/500 Epoch 00010: val_loss did not improve from 0.00783 48/48 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0092 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 218ms/epoch - 5ms/step Epoch 11/500 Epoch 00011: val_loss improved from 0.00783 to 0.00726, saving model to LSTM6.h5 48/48 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 234ms/epoch - 5ms/step Epoch 12/500 Epoch 00012: val_loss improved from 0.00726 to 0.00614, saving model to LSTM6.h5 48/48 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 241ms/epoch - 5ms/step Epoch 13/500 Epoch 00013: val_loss improved from 0.00614 to 0.00554, saving model to LSTM6.h5 48/48 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 234ms/epoch - 5ms/step Epoch 14/500 Epoch 00014: val_loss improved from 0.00554 to 0.00525, saving model to LSTM6.h5 48/48 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 232ms/epoch - 5ms/step Epoch 15/500 Epoch 00015: val_loss improved from 0.00525 to 0.00515, saving model to LSTM6.h5 48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 231ms/epoch - 5ms/step Epoch 16/500 Epoch 00016: val_loss did not improve from 0.00515 48/48 - 0s - loss: 0.0012 - 
accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 214ms/epoch - 4ms/step Epoch 17/500 Epoch 00017: val_loss did not improve from 0.00515 48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 220ms/epoch - 5ms/step Epoch 18/500 Epoch 00018: val_loss did not improve from 0.00515 48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 214ms/epoch - 4ms/step Epoch 19/500 Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. Epoch 00019: val_loss did not improve from 0.00515 48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 225ms/epoch - 5ms/step Epoch 20/500 Epoch 00020: val_loss did not improve from 0.00515 48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step Epoch 21/500 Epoch 00021: val_loss did not improve from 0.00515 48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 211ms/epoch - 4ms/step Epoch 22/500 Epoch 00022: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.9727e-04 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 213ms/epoch - 4ms/step Epoch 23/500 Epoch 00023: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.9055e-04 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 213ms/epoch - 4ms/step Epoch 24/500 Epoch 00024: ReduceLROnPlateau reducing learning rate to 1e-05. 
Epoch 00024: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.8709e-04 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step Epoch 25/500 Epoch 00025: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.8475e-04 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 213ms/epoch - 4ms/step Epoch 26/500 Epoch 00026: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.8283e-04 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step Epoch 27/500 Epoch 00027: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.8106e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step Epoch 28/500 Epoch 00028: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.7934e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step Epoch 29/500 Epoch 00029: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.7762e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step Epoch 30/500 Epoch 00030: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.7588e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step Epoch 31/500 Epoch 00031: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.7410e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step Epoch 32/500 Epoch 00032: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.7230e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step Epoch 33/500 Epoch 00033: val_loss did not improve from 0.00515 48/48 - 0s - loss: 9.7045e-04 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step 
Epochs 34–64/500: val_loss did not improve from 0.00515
(48/48 batches per epoch; loss eased from 9.6856e-04 to 8.9816e-04 while val_loss crept from 0.0060 to 0.0085; val_accuracy: 0.0037, lr: 1.0000e-05, ~210–222 ms/epoch)
Epoch 65/500: val_loss did not improve from 0.00515 - loss: 8.9552e-04 - val_loss: 0.0086 - lr: 1.0000e-05
Epoch 00065: early stopping
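The log above shows ReduceLROnPlateau stepping the learning rate from 1.0e-03 down to the 1.0e-05 floor and EarlyStopping halting training once val_loss stalls. A minimal sketch of that plateau logic follows; the patience, factor, and min_lr values are illustrative assumptions, not the notebook's actual callback settings, which live in `keras.callbacks`:

```python
# Sketch of the plateau logic behind ReduceLROnPlateau / EarlyStopping.
# Hypothetical patience/factor values; the run above used Keras's built-ins.

def schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
             stop_patience=30, min_lr=1e-5):
    """Replay a val_loss history, returning (final_lr, stopped_epoch or None)."""
    best = float("inf")
    since_best = 0      # epochs since the best val_loss (early-stopping counter)
    since_reduce = 0    # epochs since the last improvement or LR cut
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            since_best = 0
            since_reduce = 0
        else:
            since_best += 1
            since_reduce += 1
        if since_reduce >= lr_patience:      # plateau: cut the learning rate
            lr = max(lr * factor, min_lr)
            since_reduce = 0
        if since_best >= stop_patience:      # long plateau: stop training
            return lr, epoch
    return lr, None
```

Replaying a history like the one above (a few improving epochs, then a long plateau) reproduces the same staircase: 1e-3, then 1e-4, then the 1e-5 floor, then early stopping.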
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 60.485697397526344
RMSE: 7.777255132598284
MAPE: 6.358945125308518
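The metrics printed above can be reproduced in a few lines of NumPy. The directional-accuracy definition below is one plausible reading of "Prediction vs Close" (predicted move direction versus actual move direction), not necessarily the notebook's exact code:

```python
import numpy as np

def evaluate(pred, close):
    """Error metrics matching the printout: MSE, RMSE, MAPE (percent)."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100.0
    # Directional accuracy: fraction of steps where the predicted move has
    # the same sign as the actual move (an assumption about the notebook's
    # "Prediction vs Close" definition).
    hit = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close))) * 100.0
    return mse, rmse, mape, hit
```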
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
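The TA-Lib help text gives the signature but not the formula. A plain-Python sketch of the standard EMA recurrence, seeded with a simple average of the first window (which mirrors how TA-Lib initializes EMA), looks like:

```python
def ema(prices, timeperiod=30):
    """Exponential moving average with smoothing k = 2 / (timeperiod + 1).

    Seeded with the simple average of the first `timeperiod` values;
    slots before the first full window are left as None (TA-Lib's lookback).
    """
    k = 2.0 / (timeperiod + 1)
    out = [None] * len(prices)
    if len(prices) < timeperiod:
        return out
    prev = sum(prices[:timeperiod]) / timeperiod  # SMA seed
    out[timeperiod - 1] = prev
    for i in range(timeperiod, len(prices)):
        prev = prices[i] * k + prev * (1 - k)     # blend new price into average
        out[i] = prev
    return out
```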
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.831, Time=2.77 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=4.20 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16288.946, Time=6.92 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=5.44 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16226.419, Time=9.82 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-13742.844, Time=8.63 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16101.256, Time=19.83 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17006.489, Time=2.65 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17002.686, Time=2.98 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17086.654, Time=7.24 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=-16097.512, Time=16.43 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17002.132, Time=3.36 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-17004.011, Time=4.11 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 94.420 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8570.327
Date: Sun, 12 Dec 2021 AIC -17086.654
Time: 18:06:47 BIC -16960.001
Sample: 0 HQIC -17038.014
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.333e-10 9.31e-21 -2.51e+10 0.000 -2.33e-10 -2.33e-10
x2 -2.326e-10 9.29e-21 -2.5e+10 0.000 -2.33e-10 -2.33e-10
x3 -2.342e-10 9.32e-21 -2.51e+10 0.000 -2.34e-10 -2.34e-10
x4 1.0000 9.31e-21 1.07e+20 0.000 1.000 1.000
x5 -2.121e-10 8.87e-21 -2.39e+10 0.000 -2.12e-10 -2.12e-10
x6 -8.055e-10 1.64e-20 -4.9e+10 0.000 -8.05e-10 -8.05e-10
x7 -2.312e-10 9.27e-21 -2.49e+10 0.000 -2.31e-10 -2.31e-10
x8 -2.26e-10 9.17e-21 -2.47e+10 0.000 -2.26e-10 -2.26e-10
x9 -1.174e-11 1.86e-21 -6.3e+09 0.000 -1.17e-11 -1.17e-11
x10 -4.486e-11 3.98e-21 -1.13e+10 0.000 -4.49e-11 -4.49e-11
x11 -2.235e-10 9.11e-21 -2.45e+10 0.000 -2.23e-10 -2.23e-10
x12 -2.28e-10 9.21e-21 -2.48e+10 0.000 -2.28e-10 -2.28e-10
x13 -2.332e-10 9.31e-21 -2.51e+10 0.000 -2.33e-10 -2.33e-10
x14 -1.78e-09 2.57e-20 -6.92e+10 0.000 -1.78e-09 -1.78e-09
x15 -2.118e-10 8.84e-21 -2.4e+10 0.000 -2.12e-10 -2.12e-10
x16 -5.28e-10 1.4e-20 -3.76e+10 0.000 -5.28e-10 -5.28e-10
x17 -2.173e-10 8.94e-21 -2.43e+10 0.000 -2.17e-10 -2.17e-10
x18 -3.83e-11 3.74e-21 -1.02e+10 0.000 -3.83e-11 -3.83e-11
x19 -2.606e-10 9.86e-21 -2.64e+10 0.000 -2.61e-10 -2.61e-10
x20 -2.433e-10 9.48e-21 -2.57e+10 0.000 -2.43e-10 -2.43e-10
x21 -3.774e-13 1.42e-24 -2.65e+11 0.000 -3.77e-13 -3.77e-13
x22 -1.096e-11 1.35e-24 -8.11e+12 0.000 -1.1e-11 -1.1e-11
ar.L1 -0.4919 1.5e-22 -3.27e+21 0.000 -0.492 -0.492
ar.L2 -0.1922 8.41e-23 -2.28e+21 0.000 -0.192 -0.192
ar.L3 -0.0462 4.01e-23 -1.15e+21 0.000 -0.046 -0.046
ma.L1 -0.7070 3.34e-22 -2.12e+21 0.000 -0.707 -0.707
sigma2 8.977e-11 6.95e-11 1.291 0.197 -4.65e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 54.80 Jarque-Bera (JB): 4212163.49
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.43
Prob(H) (two-sided): 0.00 Kurtosis: 357.21
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.65e+43. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
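The stepwise search selects the order with the lowest AIC, where AIC = 2k − 2 ln L̂ for k estimated parameters. This can be checked against the summary above: SARIMAX(3,3,1) with 22 exogenous regressors has k = 22 + 3 AR + 1 MA + sigma2 = 27 parameters and log-likelihood 8570.327:

```python
from math import isclose

def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2*k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# Best EMA model above: 27 parameters, log-likelihood 8570.327.
assert isclose(aic(8570.327, 27), -17086.654, abs_tol=1e-6)
```

This matches the AIC reported for ARIMA(3,3,1) in the stepwise table, which is why it is picked as the best model.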
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 1/500: val_loss improved from inf to 0.13086, saving model to LSTM6.h5 - loss: 0.1233 - lr: 0.0010
Epoch 2/500: val_loss improved from 0.13086 to 0.06494, saving model to LSTM6.h5 - loss: 0.0611
Epoch 3/500: val_loss improved from 0.06494 to 0.00675, saving model to LSTM6.h5 - loss: 0.0231
Epoch 4/500: val_loss improved from 0.00675 to 0.00578, saving model to LSTM6.h5 - loss: 0.0055
Epoch 5/500: val_loss did not improve from 0.00578
Epoch 6/500: val_loss improved from 0.00578 to 0.00543, saving model to LSTM6.h5 - loss: 0.0035
Epochs 7–10/500: val_loss did not improve from 0.00543
Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epochs 11–15/500: val_loss did not improve from 0.00543
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epochs 16–20/500: val_loss did not improve from 0.00543
Epoch 00021: ReduceLROnPlateau reducing learning rate to 1e-05.
Epochs 21–55/500: val_loss did not improve from 0.00543 (loss settled near 8.9e-04; val_loss drifted from 0.0078 to 0.0082)
Epoch 56/500: val_loss did not improve from 0.00543 - loss: 8.8859e-04 - val_loss: 0.0082 - lr: 1.0000e-05
Epoch 00056: early stopping
EMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 43.66% Accuracy
MSE: 58.20305175219876
RMSE: 7.629092459277103
MAPE: 6.21442849961768
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
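As with EMA, the help text omits the formula. A weighted moving average applies linearly increasing weights 1..timeperiod across the window, so the most recent price counts most; a plain-Python sketch of that calculation:

```python
def wma(prices, timeperiod=30):
    """Weighted moving average with linear weights 1..timeperiod.

    The newest price in each window gets the largest weight, as in TA-Lib's
    WMA; slots before the first full window are left as None.
    """
    n = timeperiod
    denom = n * (n + 1) / 2.0                     # sum of weights 1 + 2 + ... + n
    out = [None] * len(prices)
    for i in range(n - 1, len(prices)):
        window = prices[i - n + 1 : i + 1]
        out[i] = sum((j + 1) * p for j, p in enumerate(window)) / denom
    return out
```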
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16080.357, Time=11.59 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14973.799, Time=6.15 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15549.629, Time=1.82 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15317.999, Time=8.46 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16061.924, Time=9.91 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15376.406, Time=14.46 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16186.215, Time=3.67 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15308.706, Time=13.95 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-14920.393, Time=13.47 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-16184.203, Time=3.05 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 86.544 seconds
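The d = 3 in the selected order means the series is differenced three times before the MA terms are fit. A quick check of what third-order differencing does: it reduces a cubic trend to a constant, which is why it can make a strongly trending series stationary:

```python
import numpy as np

# Third differencing, as implied by d=3 in ARIMA(0,3,2):
# a cubic trend becomes a constant after three difference passes.
t = np.arange(8, dtype=float)
cubic = t ** 3               # smooth cubic trend 0, 1, 8, 27, ...
d3 = np.diff(cubic, n=3)     # apply np.diff three times
print(d3)                    # -> [6. 6. 6. 6. 6.]
```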
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8118.107
Date: Sun, 12 Dec 2021 AIC -16186.215
Time: 18:16:58 BIC -16068.944
Sample: 0 HQIC -16141.178
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -9.919e-15 0.000 -8.4e-11 1.000 -0.000 0.000
x2 3.194e-15 6.3e-05 5.07e-11 1.000 -0.000 0.000
x3 3.066e-15 7.71e-05 3.98e-11 1.000 -0.000 0.000
x4 1.0000 4.4e-05 2.27e+04 0.000 1.000 1.000
x5 -3.977e-15 4.68e-05 -8.49e-11 1.000 -9.18e-05 9.18e-05
x6 -5.906e-17 8.34e-05 -7.08e-13 1.000 -0.000 0.000
x7 -8.726e-15 7.85e-05 -1.11e-10 1.000 -0.000 0.000
x8 0.0014 4.94e-05 27.704 0.000 0.001 0.001
x9 -3.542e-15 0.001 -2.63e-12 1.000 -0.003 0.003
x10 -0.0012 0.001 -1.566 0.117 -0.003 0.000
x11 0.0052 3.01e-05 172.396 0.000 0.005 0.005
x12 -0.0065 0.000 -49.747 0.000 -0.007 -0.006
x13 1.963e-14 7.85e-05 2.5e-10 1.000 -0.000 0.000
x14 -2.134e-14 0.000 -1.01e-10 1.000 -0.000 0.000
x15 3.464e-12 0.000 2.92e-08 1.000 -0.000 0.000
x16 -7.174e-13 6.45e-05 -1.11e-08 1.000 -0.000 0.000
x17 2.537e-13 7.42e-05 3.42e-09 1.000 -0.000 0.000
x18 -2.964e-15 0.000 -7.78e-12 1.000 -0.001 0.001
x19 -3.613e-12 8.67e-05 -4.17e-08 1.000 -0.000 0.000
x20 6.244e-14 0.000 2.1e-10 1.000 -0.001 0.001
x21 -4.242e-16 0.000 -1.47e-12 1.000 -0.001 0.001
x22 -2.128e-15 0.001 -1.74e-12 1.000 -0.002 0.002
ma.L1 -1.3894 4.16e-05 -3.34e+04 0.000 -1.389 -1.389
ma.L2 0.4036 0.000 3637.465 0.000 0.403 0.404
sigma2 1.287e-10 7.27e-11 1.770 0.077 -1.38e-11 2.71e-10
===================================================================================
Ljung-Box (L1) (Q): 69.00 Jarque-Bera (JB): 6269147.49
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 12.07
Prob(H) (two-sided): 0.00 Kurtosis: 434.65
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.47e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
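Warning [2] flags a near-singular covariance matrix through its condition number (6.47e+20 here). The same diagnostic can be computed directly; the tiny perturbation below is just an illustrative example of how a nearly dependent matrix blows up the condition number:

```python
import numpy as np

# A near-singular matrix has a huge condition number, which is what
# warning [2] flags; standard errors derived from it are unreliable.
well_conditioned = np.eye(3)
near_singular = np.array([[1.0, 1.0],
                          [1.0, 1.0 + 1e-12]])   # rows almost identical

print(np.linalg.cond(well_conditioned))  # 1.0
print(np.linalg.cond(near_singular))     # ~4e12: standard errors unstable
```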
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 1/500: val_loss improved from inf to 0.00724, saving model to LSTM6.h5 - loss: 0.0847 - lr: 0.0010
Epoch 2/500: val_loss improved from 0.00724 to 0.00667, saving model to LSTM6.h5 - loss: 0.0178
Epoch 3/500: val_loss improved from 0.00667 to 0.00479, saving model to LSTM6.h5 - loss: 0.0044
Epoch 4/500: val_loss did not improve from 0.00479
Epoch 5/500: val_loss improved from 0.00479 to 0.00468, saving model to LSTM6.h5 - loss: 0.0029
Epochs 6–9/500: val_loss did not improve from 0.00468 (training loss briefly unstable, up to 0.0303)
Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epochs 10–14/500: val_loss did not improve from 0.00468
Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epochs 15–19/500: val_loss did not improve from 0.00468
Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.
Epochs 20–54/500: val_loss did not improve from 0.00468 (loss eased to ~9.5e-04; val_loss declined from 0.0132 to 0.0093)
Epoch 55/500: val_loss did not improve from 0.00468 - loss: 9.4522e-04 - val_loss: 0.0092 - lr: 1.0000e-05
Epoch 00055: early stopping
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 60.485697397526344
RMSE: 7.777255132598284
MAPE: 6.358945125308518
EMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 43.66% Accuracy
MSE: 58.20305175219876
RMSE: 7.629092459277103
MAPE: 6.21442849961768
WMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 70.88350276857014
RMSE: 8.419234096316014
MAPE: 6.6789569931753
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
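The TA-Lib help text above defines DEMA as a 30-period overlap study. For readers without TA-Lib installed, the indicator can be reproduced directly from its definition, DEMA = 2·EMA(price) − EMA(EMA(price)). The sketch below is a minimal pandas equivalent (the `dema` helper and the toy series are illustrative, not from the notebook; note TA-Lib emits NaN during its warm-up window, while `ewm` does not):

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Double Exponential Moving Average: 2*EMA - EMA(EMA).

    Mirrors the shape of TA-Lib's DEMA(price, timeperiod=30);
    the first ~2*(timeperiod-1) values are a warm-up region.
    """
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2

# Toy usage: a short rising series with a small period for illustration.
close = pd.Series([10.0, 11.0, 12.0, 13.0, 14.0, 15.0])
print(dema(close, timeperiod=3).tolist())
```

Because the second EMA cancels part of the single EMA's lag, DEMA tracks a trending series more closely than a plain EMA of the same period, which is why it appears here alongside SMA/EMA/WMA as a lower-lag smoothing choice.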
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.780, Time=3.03 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=4.30 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15584.877, Time=8.29 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=6.19 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15271.475, Time=7.68 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15128.422, Time=9.76 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16352.675, Time=18.23 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17028.022, Time=4.55 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17002.621, Time=3.05 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17085.445, Time=6.59 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=15.40 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17001.997, Time=3.28 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16996.668, Time=4.00 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 94.368 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.723
Date: Sun, 12 Dec 2021 AIC -17085.445
Time: 18:22:36 BIC -16958.792
Sample: 0 HQIC -17036.805
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.8e-10 1.36e-20 -2.05e+10 0.000 -2.8e-10 -2.8e-10
x2 -2.817e-10 1.37e-20 -2.06e+10 0.000 -2.82e-10 -2.82e-10
x3 -2.805e-10 1.36e-20 -2.06e+10 0.000 -2.8e-10 -2.8e-10
x4 1.0000 1.37e-20 7.33e+19 0.000 1.000 1.000
x5 -2.598e-10 1.31e-20 -1.98e+10 0.000 -2.6e-10 -2.6e-10
x6 -1.389e-09 2.98e-20 -4.66e+10 0.000 -1.39e-09 -1.39e-09
x7 -2.789e-10 1.36e-20 -2.05e+10 0.000 -2.79e-10 -2.79e-10
x8 -2.761e-10 1.35e-20 -2.04e+10 0.000 -2.76e-10 -2.76e-10
x9 -2.219e-12 3.36e-22 -6.6e+09 0.000 -2.22e-12 -2.22e-12
x10 -1.345e-10 9.37e-21 -1.43e+10 0.000 -1.34e-10 -1.34e-10
x11 -2.899e-10 1.39e-20 -2.09e+10 0.000 -2.9e-10 -2.9e-10
x12 -2.602e-10 1.32e-20 -1.98e+10 0.000 -2.6e-10 -2.6e-10
x13 -2.807e-10 1.36e-20 -2.06e+10 0.000 -2.81e-10 -2.81e-10
x14 -1.87e-09 3.52e-20 -5.31e+10 0.000 -1.87e-09 -1.87e-09
x15 -2.825e-10 1.37e-20 -2.07e+10 0.000 -2.82e-10 -2.82e-10
x16 -8.187e-11 7.33e-21 -1.12e+10 0.000 -8.19e-11 -8.19e-11
x17 -2.441e-10 1.27e-20 -1.92e+10 0.000 -2.44e-10 -2.44e-10
x18 -6.411e-10 2.06e-20 -3.11e+10 0.000 -6.41e-10 -6.41e-10
x19 -2.929e-10 1.39e-20 -2.11e+10 0.000 -2.93e-10 -2.93e-10
x20 -4.339e-10 1.7e-20 -2.56e+10 0.000 -4.34e-10 -4.34e-10
x21 -3.589e-13 2.52e-24 -1.42e+11 0.000 -3.59e-13 -3.59e-13
x22 -1.088e-11 2.36e-24 -4.6e+12 0.000 -1.09e-11 -1.09e-11
ar.L1 -0.4923 1.46e-22 -3.37e+21 0.000 -0.492 -0.492
ar.L2 -0.1923 8.47e-23 -2.27e+21 0.000 -0.192 -0.192
ar.L3 -0.0462 4.02e-23 -1.15e+21 0.000 -0.046 -0.046
ma.L1 -0.7077 3.31e-22 -2.14e+21 0.000 -0.708 -0.708
sigma2 8.99e-11 6.95e-11 1.293 0.196 -4.64e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 55.15 Jarque-Bera (JB): 4171184.78
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.27
Prob(H) (two-sided): 0.00 Kurtosis: 355.49
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.53e+42. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 1/500 Epoch 00001: val_loss improved from inf to 0.21665, saving model to LSTM6.h5 10/10 - 4s - loss: 0.1989 - val_loss: 0.2167 - lr: 0.0010 Epoch 2/500 Epoch 00002: val_loss improved from 0.21665 to 0.00898, saving model to LSTM6.h5 [Epochs 3-52: val_loss did not improve from 0.00898; learning rate reduced to 1.0000e-04 at epoch 7 and to 1.0000e-05 at epoch 12; per-epoch log truncated] Epoch 00052: early stopping
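The training log above shows two Keras callbacks interacting: ReduceLROnPlateau cuts the learning rate by a factor after a patience window with no val_loss improvement (down to a floor), while EarlyStopping halts training after a longer window, and ModelCheckpoint saves on each improvement. The bookkeeping behind those log lines can be sketched in plain Python; the patience values below are illustrative assumptions, since the notebook's actual callback settings are not shown:

```python
def run_schedule(val_losses, lr=1e-3, factor=0.1, min_lr=1e-5,
                 plateau_patience=5, stop_patience=50):
    """Trace ReduceLROnPlateau + EarlyStopping behavior over a sequence
    of per-epoch validation losses. Returns (best_loss, final_lr,
    stopped_epoch). Both patience counters reset on any improvement."""
    best = float("inf")
    plateau_wait = stop_wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            plateau_wait = stop_wait = 0  # checkpoint would save here
        else:
            plateau_wait += 1
            stop_wait += 1
            if plateau_wait >= plateau_patience:
                lr = max(lr * factor, min_lr)  # clamp at the lr floor
                plateau_wait = 0
            if stop_wait >= stop_patience:
                return best, lr, epoch  # early stopping
    return best, lr, len(val_losses)

# Losses that improve twice and then plateau, as in the DEMA run above.
losses = [0.21665, 0.00898] + [0.03] * 60
print(run_schedule(losses))
```

With these assumed settings the trace reproduces the shape of the log: best val_loss 0.00898 at epoch 2, the learning rate stepped down to 1e-05, and early stopping at epoch 52.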
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 60.485697397526344
RMSE: 7.777255132598284
MAPE: 6.358945125308518
EMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 43.66% Accuracy
MSE: 58.20305175219876
RMSE: 7.629092459277103
MAPE: 6.21442849961768
WMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 70.88350276857014
RMSE: 8.419234096316014
MAPE: 6.6789569931753
DEMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 119.53246002468391
RMSE: 10.933090140700566
MAPE: 9.747683697911842
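The four figures reported for each moving average can be reproduced from a prediction series and the closing prices. Below is a minimal NumPy sketch; the directional "Prediction vs Close" accuracy is interpreted here as sign agreement between predicted and actual day-over-day moves, which is an assumption about the notebook's exact definition (the `evaluate` helper and toy arrays are illustrative):

```python
import numpy as np

def evaluate(pred: np.ndarray, close: np.ndarray) -> dict:
    """MSE, RMSE, MAPE (%) and directional accuracy (%) of predictions."""
    err = pred - close
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / close)) * 100)
    # Directional accuracy: did the prediction move the same way as price?
    pred_dir = np.sign(np.diff(pred))
    close_dir = np.sign(np.diff(close))
    direction = float(np.mean(pred_dir == close_dir) * 100)
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "Direction%": direction}

close = np.array([100.0, 102.0, 101.0, 103.0, 104.0])
pred = np.array([101.0, 101.5, 102.0, 102.5, 105.0])
print(evaluate(pred, close))
```

Note that a low RMSE/MAPE and a high directional accuracy measure different things, which is visible above: DEMA has the worst MSE/RMSE/MAPE of the four smoothers yet the best prediction-vs-prediction directional score.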
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17059.325, Time=3.67 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=4.33 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16133.019, Time=6.02 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=5.66 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16091.980, Time=7.44 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16009.844, Time=12.43 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-15757.180, Time=8.86 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17029.439, Time=4.41 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17000.917, Time=3.48 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=45.027, Time=4.75 sec
Best model: ARIMA(1,3,1)(0,0,0)[0]
Total fit time: 61.075 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 1) Log Likelihood 8554.662
Date: Sun, 12 Dec 2021 AIC -17059.325
Time: 18:32:29 BIC -16942.054
Sample: 0 HQIC -17014.288
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.409e-10 5.52e-21 -2.55e+10 0.000 -1.41e-10 -1.41e-10
x2 -1.378e-10 5.47e-21 -2.52e+10 0.000 -1.38e-10 -1.38e-10
x3 -1.323e-10 5.35e-21 -2.47e+10 0.000 -1.32e-10 -1.32e-10
x4 1.0000 5.41e-21 1.85e+20 0.000 1.000 1.000
x5 -1.221e-10 5.15e-21 -2.37e+10 0.000 -1.22e-10 -1.22e-10
x6 -8.465e-10 1.3e-20 -6.53e+10 0.000 -8.47e-10 -8.47e-10
x7 -1.3e-10 5.32e-21 -2.44e+10 0.000 -1.3e-10 -1.3e-10
x8 -1.267e-10 5.27e-21 -2.41e+10 0.000 -1.27e-10 -1.27e-10
x9 -2.032e-11 6.67e-22 -3.05e+10 0.000 -2.03e-11 -2.03e-11
x10 -5.319e-11 2.3e-21 -2.31e+10 0.000 -5.32e-11 -5.32e-11
x11 -1.275e-10 5.28e-21 -2.42e+10 0.000 -1.28e-10 -1.28e-10
x12 -1.262e-10 5.23e-21 -2.41e+10 0.000 -1.26e-10 -1.26e-10
x13 -1.339e-10 5.39e-21 -2.49e+10 0.000 -1.34e-10 -1.34e-10
x14 -1.092e-09 1.55e-20 -7.06e+10 0.000 -1.09e-09 -1.09e-09
x15 -1.342e-10 5.42e-21 -2.48e+10 0.000 -1.34e-10 -1.34e-10
x16 -2.01e-10 6.63e-21 -3.03e+10 0.000 -2.01e-10 -2.01e-10
x17 -1.144e-10 5.01e-21 -2.29e+10 0.000 -1.14e-10 -1.14e-10
x18 -9.245e-11 4.49e-21 -2.06e+10 0.000 -9.24e-11 -9.24e-11
x19 -1.646e-10 6.01e-21 -2.74e+10 0.000 -1.65e-10 -1.65e-10
x20 -2.482e-10 7.35e-21 -3.37e+10 0.000 -2.48e-10 -2.48e-10
x21 -3.385e-12 3.14e-24 -1.08e+12 0.000 -3.39e-12 -3.39e-12
x22 -8.066e-11 2.47e-23 -3.26e+12 0.000 -8.07e-11 -8.07e-11
ar.L1 -0.2877 2.48e-22 -1.16e+21 0.000 -0.288 -0.288
ma.L1 -0.9134 1.05e-21 -8.7e+20 0.000 -0.913 -0.913
sigma2 9.332e-11 6.96e-11 1.340 0.180 -4.32e-11 2.3e-10
===================================================================================
Ljung-Box (L1) (Q): 84.37 Jarque-Bera (JB): 4308764.36
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 5.22
Prob(H) (two-sided): 0.00 Kurtosis: 361.26
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.32e+42. Standard errors may be unstable.
ARIMA order: (1, 3, 1)
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 1/500 Epoch 00001: val_loss improved from inf to 0.15069, saving model to LSTM6.h5 Epoch 2/500 Epoch 00002: val_loss improved from 0.15069 to 0.00809, saving model to LSTM6.h5 Epoch 6/500 Epoch 00006: val_loss improved from 0.00809 to 0.00341, saving model to LSTM6.h5 [Epochs 7-42: val_loss did not improve from 0.00341; learning rate reduced to 1.0000e-04 at epoch 11 and to 1.0000e-05 at epoch 16; per-epoch log truncated] [Epochs 43-64: val_loss improved gradually from 0.00341 to 0.00330, saving model to LSTM6.h5 at each step] [Epochs 65-88: val_loss did not improve from 0.00330; per-epoch log truncated] Epoch 88/500 Epoch 00088: val_loss did not improve from 0.00330 45/45 - 0s - loss: 
9.3518e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step Epoch 89/500 Epoch 00089: val_loss did not improve from 0.00330 45/45 - 0s - loss: 9.3032e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step Epoch 90/500 Epoch 00090: val_loss did not improve from 0.00330 45/45 - 0s - loss: 9.2551e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step Epoch 91/500 Epoch 00091: val_loss did not improve from 0.00330 45/45 - 0s - loss: 9.2076e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 212ms/epoch - 5ms/step Epoch 92/500 Epoch 00092: val_loss did not improve from 0.00330 45/45 - 0s - loss: 9.1606e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step Epoch 93/500 Epoch 00093: val_loss did not improve from 0.00330 45/45 - 0s - loss: 9.1142e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step Epoch 94/500 Epoch 00094: val_loss did not improve from 0.00330 45/45 - 0s - loss: 9.0684e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 204ms/epoch - 5ms/step Epoch 95/500 Epoch 00095: val_loss did not improve from 0.00330 45/45 - 0s - loss: 9.0232e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 208ms/epoch - 5ms/step Epoch 96/500 Epoch 00096: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.9786e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step Epoch 97/500 Epoch 00097: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.9346e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 205ms/epoch - 5ms/step Epoch 98/500 Epoch 00098: val_loss did not improve from 0.00330 45/45 - 
0s - loss: 8.8912e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step Epoch 99/500 Epoch 00099: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.8485e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step Epoch 100/500 Epoch 00100: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.8064e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 204ms/epoch - 5ms/step Epoch 101/500 Epoch 00101: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.7649e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 211ms/epoch - 5ms/step Epoch 102/500 Epoch 00102: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.7241e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step Epoch 103/500 Epoch 00103: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.6839e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step Epoch 104/500 Epoch 00104: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.6444e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step Epoch 105/500 Epoch 00105: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.6055e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step Epoch 106/500 Epoch 00106: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.5673e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step Epoch 107/500 Epoch 00107: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.5297e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step Epoch 108/500 Epoch 00108: val_loss did not improve 
from 0.00330 45/45 - 0s - loss: 8.4928e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step Epoch 109/500 Epoch 00109: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.4565e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step Epoch 110/500 Epoch 00110: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.4208e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step Epoch 111/500 Epoch 00111: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.3858e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step Epoch 112/500 Epoch 00112: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.3515e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 202ms/epoch - 4ms/step Epoch 113/500 Epoch 00113: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.3177e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 200ms/epoch - 4ms/step Epoch 114/500 Epoch 00114: val_loss did not improve from 0.00330 45/45 - 0s - loss: 8.2846e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step Epoch 00114: early stopping
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 60.485697397526344
RMSE: 7.777255132598284
MAPE: 6.358945125308518
EMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 43.66% Accuracy
MSE: 58.20305175219876
RMSE: 7.629092459277103
MAPE: 6.21442849961768
WMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 70.88350276857014
RMSE: 8.419234096316014
MAPE: 6.6789569931753
DEMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 119.53246002468391
RMSE: 10.933090140700566
MAPE: 9.747683697911842
KAMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 61.13308833987969
RMSE: 7.818765141624327
MAPE: 6.461585168646619
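Each indicator's scores above pair a directional hit rate with standard error metrics. A minimal sketch of how such numbers can be computed; the exact definitions behind "Prediction vs Close" and "Prediction vs Prediction" are assumptions here (sign-of-move agreement between two series):

```python
import numpy as np

def directional_accuracy(pred, actual):
    """Percent of steps where the predicted move direction matches the actual move."""
    pred_dir = np.sign(np.diff(np.asarray(pred, dtype=float)))
    actual_dir = np.sign(np.diff(np.asarray(actual, dtype=float)))
    return np.mean(pred_dir == actual_dir) * 100

def error_metrics(pred, actual):
    """MSE, RMSE, and MAPE (in percent) between predictions and actuals."""
    pred, actual = np.asarray(pred, dtype=float), np.asarray(actual, dtype=float)
    err = pred - actual
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / actual)) * 100
    return mse, rmse, mape
```

"Prediction vs Close" would compare model output against the close series, and "Prediction vs Prediction" consecutive model outputs against each other, under these assumed definitions.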
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
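Per the help text above, MIDPOINT is the midpoint of the highest and lowest price over each trailing window. A NumPy sketch of the same calculation (an illustration, not TA-Lib's implementation):

```python
import numpy as np

def midpoint(price, timeperiod=14):
    """(highest + lowest) / 2 over each trailing window; NaN until the window fills."""
    price = np.asarray(price, dtype=float)
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = (window.max() + window.min()) / 2.0
    return out
```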
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.733, Time=2.82 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.592, Time=4.28 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15587.551, Time=7.81 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.592, Time=6.26 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16365.334, Time=10.33 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16163.760, Time=12.75 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16245.181, Time=15.13 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17028.017, Time=5.19 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17106.133, Time=6.00 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17085.425, Time=6.74 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=-17000.553, Time=3.99 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 81.307 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood 8579.066
Date: Sun, 12 Dec 2021 AIC -17106.133
Time: 18:37:13 BIC -16984.171
Sample: 0 HQIC -17059.294
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -3.048e-10 1.69e-20 -1.8e+10 0.000 -3.05e-10 -3.05e-10
x2 -3.042e-10 1.75e-20 -1.74e+10 0.000 -3.04e-10 -3.04e-10
x3 -3.108e-10 1.62e-20 -1.92e+10 0.000 -3.11e-10 -3.11e-10
x4 1.0000 1.69e-20 5.91e+19 0.000 1.000 1.000
x5 -2.767e-10 1.61e-20 -1.72e+10 0.000 -2.77e-10 -2.77e-10
x6 -6.072e-09 1.38e-19 -4.42e+10 0.000 -6.07e-09 -6.07e-09
x7 -2.8e-10 1.62e-20 -1.73e+10 0.000 -2.8e-10 -2.8e-10
x8 -2.792e-10 1.65e-20 -1.69e+10 0.000 -2.79e-10 -2.79e-10
x9 -1.502e-10 1.02e-21 -1.48e+11 0.000 -1.5e-10 -1.5e-10
x10 -2.482e-10 4.3e-21 -5.77e+10 0.000 -2.48e-10 -2.48e-10
x11 -2.764e-10 1.64e-20 -1.69e+10 0.000 -2.76e-10 -2.76e-10
x12 -2.857e-10 1.64e-20 -1.74e+10 0.000 -2.86e-10 -2.86e-10
x13 -2.944e-10 1.66e-20 -1.77e+10 0.000 -2.94e-10 -2.94e-10
x14 -2.403e-09 4.86e-20 -4.95e+10 0.000 -2.4e-09 -2.4e-09
x15 -3.368e-10 1.81e-20 -1.86e+10 0.000 -3.37e-10 -3.37e-10
x16 -2.169e-10 1.45e-20 -1.49e+10 0.000 -2.17e-10 -2.17e-10
x17 -2.124e-10 1.44e-20 -1.47e+10 0.000 -2.12e-10 -2.12e-10
x18 -9.125e-10 2.98e-20 -3.06e+10 0.000 -9.13e-10 -9.13e-10
x19 -3.698e-10 1.9e-20 -1.95e+10 0.000 -3.7e-10 -3.7e-10
x20 -8.9e-10 2.94e-20 -3.03e+10 0.000 -8.9e-10 -8.9e-10
x21 -1.844e-11 1.86e-22 -9.9e+10 0.000 -1.84e-11 -1.84e-11
x22 -2.169e-10 5.04e-22 -4.3e+11 0.000 -2.17e-10 -2.17e-10
ar.L1 -1.2011 7.4e-23 -1.62e+22 0.000 -1.201 -1.201
ar.L2 -0.9017 1.51e-22 -5.98e+21 0.000 -0.902 -0.902
ar.L3 -0.4014 9.48e-23 -4.23e+21 0.000 -0.401 -0.401
sigma2 8.782e-11 6.95e-11 1.264 0.206 -4.84e-11 2.24e-10
===================================================================================
Ljung-Box (L1) (Q): 3.61 Jarque-Bera (JB): 16191.93
Prob(Q): 0.06 Prob(JB): 0.00
Heteroskedasticity (H): 0.35 Skew: 0.59
Prob(H) (two-sided): 0.00 Kurtosis: 24.94
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.23e+40. Standard errors may be unstable.
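The information criteria in the table appear consistent with the reported log likelihood. Assuming k = 26 estimated parameters (22 regressor coefficients, 3 AR terms, and sigma2) and an effective sample of 805 observations after d = 3 differencing, AIC = 2k − 2·lnL and BIC = k·ln(n) − 2·lnL reproduce the printed values:

```python
import math

loglik = 8579.066   # Log Likelihood from the SARIMAX table above
k = 26              # assumed: 22 exogenous coefs + 3 AR coefs + sigma2
n_eff = 808 - 3     # assumed: observations remaining after d=3 differencing

aic = 2 * k - 2 * loglik
bic = k * math.log(n_eff) - 2 * loglik
print(aic, bic)     # ≈ -17106.13 and ≈ -16984.17, matching the table
```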
ARIMA order: (3, 3, 0)
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Training log, LSTM6.h5 (epochs 1–62, condensed): val_loss dropped from 0.1233 to 0.0064 by epoch 2 and reached its best value of 0.00482 at epoch 12, with a checkpoint saved to LSTM6.h5 on each improvement; ReduceLROnPlateau cut the learning rate from 1.0000e-03 to 1.0000e-04 at epoch 7, to 1.0000e-05 at epoch 16, and held the 1e-05 floor from epoch 21; no further improvement followed (val_loss drifted up to 0.0067, ~58 batches at ~5ms/step). Epoch 00062: early stopping.
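The schedule visible in the training logs above (lr 1e-03 → 1e-04 → 1e-05 with a floor, a checkpoint on each val_loss improvement, then a final early stop) matches Keras's ReduceLROnPlateau, ModelCheckpoint, and EarlyStopping callbacks. A minimal pure-Python sketch of the plateau logic, with assumed patience and factor values:

```python
class PlateauLR:
    """Cut lr by `factor` after `patience` epochs without val_loss improvement,
    never dropping below `min_lr` (mimics Keras ReduceLROnPlateau)."""

    def __init__(self, lr=1e-3, factor=0.1, patience=4, min_lr=1e-5):
        self.lr, self.factor, self.patience, self.min_lr = lr, factor, patience, min_lr
        self.best = float("inf")  # best val_loss seen so far
        self.wait = 0             # epochs since last improvement

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```

EarlyStopping follows the same counter idea but halts training instead of lowering the rate once its (typically larger) patience is exhausted.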
MIDPOINT
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 61.5384692642518
RMSE: 7.8446458979517875
MAPE: 6.407298993379305
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
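T3 as documented above is Tillson's triple-smoothed moving average: three nested passes of a "generalized DEMA", GD(x) = (1+v)·EMA(x) − v·EMA(EMA(x)). A pandas sketch of that textbook formulation (TA-Lib's exact smoothing constants and lookback handling may differ; `t3` here is an illustrative helper, not the library function):

```python
import pandas as pd

def t3(price, timeperiod=5, vfactor=0.7):
    """Tillson T3: GD applied three times, where
    GD(x) = (1 + v) * EMA(x) - v * EMA(EMA(x))."""
    s = pd.Series(price, dtype=float)
    ema = lambda x: x.ewm(span=timeperiod, adjust=False).mean()
    gd = lambda x: (1 + vfactor) * ema(x) - vfactor * ema(ema(x))
    return gd(gd(gd(s)))
```

A useful sanity check: a constant price series should pass through unchanged, since every EMA of a constant is that constant.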
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16954.347, Time=2.33 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14725.736, Time=2.40 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16732.390, Time=7.97 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15913.358, Time=6.88 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16550.077, Time=10.20 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15004.835, Time=9.37 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16027.273, Time=9.76 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-16934.995, Time=2.32 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16924.758, Time=3.64 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=-16952.347, Time=2.30 sec
Best model: ARIMA(1,3,1)(0,0,0)[0]
Total fit time: 57.183 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 1) Log Likelihood 8502.173
Date: Sun, 12 Dec 2021 AIC -16954.347
Time: 18:40:27 BIC -16837.076
Sample: 0 HQIC -16909.310
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 3.409e-14 2.62e-06 1.3e-08 1.000 -5.13e-06 5.13e-06
x2 1.816e-14 2.62e-06 6.93e-09 1.000 -5.13e-06 5.13e-06
x3 -2.039e-15 2.47e-06 -8.26e-10 1.000 -4.84e-06 4.84e-06
x4 1.0000 2.5e-06 4e+05 0.000 1.000 1.000
x5 2.488e-12 2.48e-06 1e-06 1.000 -4.86e-06 4.86e-06
x6 2.84e-15 6.48e-06 4.38e-10 1.000 -1.27e-05 1.27e-05
x7 3.618e-13 3.24e-06 1.12e-07 1.000 -6.36e-06 6.36e-06
x8 -0.0002 4.44e-06 -43.079 0.000 -0.000 -0.000
x9 2.93e-14 6.3e-08 4.65e-07 1.000 -1.23e-07 1.23e-07
x10 -2.843e-05 9.63e-06 -2.951 0.003 -4.73e-05 -9.55e-06
x11 0.0002 3.28e-06 53.981 0.000 0.000 0.000
x12 0.0001 5.63e-06 23.078 0.000 0.000 0.000
x13 -2.595e-14 2.63e-06 -9.88e-09 1.000 -5.15e-06 5.15e-06
x14 -6.497e-14 5.76e-06 -1.13e-08 1.000 -1.13e-05 1.13e-05
x15 1.699e-12 3.08e-06 5.51e-07 1.000 -6.04e-06 6.04e-06
x16 -3.969e-12 4.77e-06 -8.33e-07 1.000 -9.34e-06 9.34e-06
x17 5.452e-12 8.58e-07 6.35e-06 1.000 -1.68e-06 1.68e-06
x18 -3.68e-13 1.33e-05 -2.76e-08 1.000 -2.61e-05 2.61e-05
x19 -5.643e-13 4.61e-06 -1.22e-07 1.000 -9.03e-06 9.03e-06
x20 6.651e-14 4.9e-05 1.36e-09 1.000 -9.61e-05 9.61e-05
x21 -1.76e-16 8.47e-11 -2.08e-06 1.000 -1.66e-10 1.66e-10
x22 -7.82e-16 1.75e-10 -4.47e-06 1.000 -3.43e-10 3.43e-10
ar.L1 -0.2858 5.46e-08 -5.24e+06 0.000 -0.286 -0.286
ma.L1 -0.9143 5.59e-08 -1.63e+07 0.000 -0.914 -0.914
sigma2 1e-10 6.99e-11 1.430 0.153 -3.71e-11 2.37e-10
===================================================================================
Ljung-Box (L1) (Q): 84.00 Jarque-Bera (JB): 4822228.07
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -6.05
Prob(H) (two-sided): 0.00 Kurtosis: 381.97
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.54e+27. Standard errors may be unstable.
ARIMA order: (1, 3, 1)
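Both stepwise searches settle on d = 3, meaning the series is third-differenced before the ARMA terms are fit; third-order differencing reduces any cubic trend to a constant. A quick check with NumPy:

```python
import numpy as np

x = np.array([i ** 3 for i in range(8)], dtype=float)  # pure cubic trend
d3 = np.diff(x, n=3)                                   # third difference
print(d3)  # constant 6.0 everywhere: the third difference of n^3 is 3! = 6
```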
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 00001: val_loss improved from inf to 0.09054, saving model to LSTM6.h5
Epoch 00002: val_loss improved from 0.09054 to 0.00550, saving model to LSTM6.h5
Epoch 00006: val_loss improved from 0.00550 to 0.00397, saving model to LSTM6.h5
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000e-04
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000e-05
Epoch 00021: ReduceLROnPlateau reducing learning rate to 1e-05
...
Epoch 00056: val_loss did not improve from 0.00397
Epoch 00056: early stopping
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 60.485697397526344
RMSE: 7.777255132598284
MAPE: 6.358945125308518
EMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 43.66% Accuracy
MSE: 58.20305175219876
RMSE: 7.629092459277103
MAPE: 6.21442849961768
WMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 70.88350276857014
RMSE: 8.419234096316014
MAPE: 6.6789569931753
DEMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 119.53246002468391
RMSE: 10.933090140700566
MAPE: 9.747683697911842
KAMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 61.13308833987969
RMSE: 7.818765141624327
MAPE: 6.461585168646619
MIDPOINT
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 61.5384692642518
RMSE: 7.8446458979517875
MAPE: 6.407298993379305
T3
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 163.02597008234568
RMSE: 12.768162361214932
MAPE: 10.498544939048504
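Each MA variant above is scored the same way: MSE, RMSE, MAPE on price level, plus a directional accuracy that compares the sign of day-to-day moves. A minimal sketch of how such metrics could be computed (the function and test values are illustrative, not the notebook's own helpers):

```python
import numpy as np

def score(pred, close):
    """MSE / RMSE / MAPE plus directional accuracy of predicted moves."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    # Direction of the predicted next-day move vs the realised move
    hits = np.sign(np.diff(pred)) == np.sign(np.diff(close))
    return mse, rmse, mape, 100 * hits.mean()

mse, rmse, mape, acc = score([101, 103, 102, 105], [100, 104, 103, 104])
# mse == 1.0, rmse == 1.0, acc == 100.0 (all three moves in the right direction)
```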
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
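TA-Lib's TEMA (help text above) is defined as 3·EMA₁ − 3·EMA₂ + EMA₃, where each successive EMA smooths the previous one; the combination cancels much of a plain EMA's lag. A pandas sketch of the same formula (not TA-Lib itself, so values can differ slightly from `talib.TEMA`, which also discards the unstable warm-up period):

```python
import pandas as pd

def tema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    # Triple EMA: 3*EMA1 - 3*EMA2 + EMA3 reduces the lag of a plain EMA
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    ema3 = ema2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * ema1 - 3 * ema2 + ema3

prices = pd.Series(range(1, 61), dtype=float)  # a synthetic linear ramp
smooth = tema(prices, timeperiod=30)
```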
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16412.930, Time=10.23 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14867.265, Time=6.41 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15902.803, Time=5.45 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15117.003, Time=6.98 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15669.652, Time=7.65 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-12676.374, Time=8.53 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16418.724, Time=9.53 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15107.772, Time=12.50 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15708.742, Time=15.20 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-13418.641, Time=23.25 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 105.756 seconds
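`auto_arima`'s stepwise search above simply keeps the candidate order with the lowest AIC (here ARIMA(0,3,2) at −16418.724). The criterion is AIC = 2k − 2 ln L̂, which for a Gaussian least-squares fit reduces (up to constants) to n·ln(σ̂²) + 2k. A plain-numpy illustration of AIC-based order selection for simple AR(p) fits only; pmdarima fits full ARIMA models by maximum likelihood, so this is a sketch of the criterion, not of its search:

```python
import numpy as np

def ar_aic(series, p):
    """Least-squares AR(p) fit and its (Gaussian) AIC: n*log(sigma2) + 2*(p+1)."""
    y = np.asarray(series, float)
    # Lag matrix: column k holds y[t-k-1] for each target y[t]
    X = np.column_stack([y[p - k - 1: len(y) - k - 1] for k in range(p)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    sigma2 = np.mean(resid ** 2)
    return len(target) * np.log(sigma2) + 2 * (p + 1)

rng = np.random.default_rng(0)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + e[t]   # a synthetic AR(1) process
best_p = min([1, 2, 3], key=lambda p: ar_aic(y, p))
```

Extra lags only lower the AIC if they reduce the residual variance enough to beat the 2-per-parameter penalty, which is why the stepwise trace above rejects most of the larger orders.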
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8234.362
Date: Sun, 12 Dec 2021 AIC -16418.724
Time: 18:45:30 BIC -16301.453
Sample: 0 HQIC -16373.687
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.784e-07 0.001 -0.000 1.000 -0.002 0.002
x2 -1.784e-07 0.001 -0.000 1.000 -0.003 0.003
x3 -1.794e-07 0.001 -0.000 1.000 -0.002 0.002
x4 1.0000 0.000 2616.546 0.000 0.999 1.001
x5 -1.704e-07 0.000 -0.000 1.000 -0.001 0.001
x6 -2.858e-07 3.31e-05 -0.009 0.993 -6.52e-05 6.46e-05
x7 -1.754e-07 0.001 -0.000 1.000 -0.002 0.002
x8 0.0007 0.000 3.091 0.002 0.000 0.001
x9 3.313e-08 0.000 9.39e-05 1.000 -0.001 0.001
x10 3.499e-06 0.000 0.022 0.983 -0.000 0.000
x11 -0.0003 0.000 -1.284 0.199 -0.001 0.000
x12 -6.362e-05 0.000 -0.260 0.795 -0.001 0.000
x13 -1.783e-07 0.000 -0.001 0.999 -0.000 0.000
x14 -5.244e-07 0.001 -0.001 0.999 -0.001 0.001
x15 -1.737e-07 0.000 -0.001 0.999 -0.000 0.000
x16 -2.583e-07 0.000 -0.001 0.999 -0.000 0.000
x17 -1.74e-07 0.000 -0.001 0.999 -0.000 0.000
x18 -5.776e-08 0.000 -0.000 1.000 -0.000 0.000
x19 -1.95e-07 0.000 -0.002 0.999 -0.000 0.000
x20 1.72e-07 0.000 0.001 0.999 -0.000 0.000
x21 -7.548e-10 0.001 -9.93e-07 1.000 -0.001 0.001
x22 -1.194e-08 0.000 -8.47e-05 1.000 -0.000 0.000
ma.L1 -1.3862 1.58e-05 -8.78e+04 0.000 -1.386 -1.386
ma.L2 0.4019 4.28e-05 9396.834 0.000 0.402 0.402
sigma2 1.265e-10 7.58e-11 1.669 0.095 -2.2e-11 2.75e-10
===================================================================================
Ljung-Box (L1) (Q): 66.79 Jarque-Bera (JB): 5900482.38
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -11.32
Prob(H) (two-sided): 0.00 Kurtosis: 421.81
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.07e+19. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 00001: val_loss improved from inf to 0.10823, saving model to LSTM6.h5
Epoch 00002: val_loss improved from 0.10823 to 0.01063, saving model to LSTM6.h5
Epoch 00004: val_loss improved from 0.01063 to 0.00919, saving model to LSTM6.h5
Epoch 00006: val_loss improved from 0.00919 to 0.00721, saving model to LSTM6.h5
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000e-04
Epoch 00012: val_loss improved from 0.00721 to 0.00668, saving model to LSTM6.h5
Epoch 00017: ReduceLROnPlateau reducing learning rate to 1.0000e-05
Epoch 00022: ReduceLROnPlateau reducing learning rate to 1e-05
...
Epoch 00062: val_loss did not improve from 0.00668
Epoch 00062: early stopping
TEMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 66.14227466119469
RMSE: 8.132790090811067
MAPE: 7.1170786919128775
Runtime: 47.84 minutes
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment6.png to Experiment6 (2).png
img = cv2.imread('Experiment6.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fcec1090550>
import json
with open('simulation6_data.json') as json_file:
    simulation6 = json.load(json_file)
fileimg = 'Experiment6'
for SIM in simulation6.keys():
    plot_train(simulation6, SIM)
    plot_test(simulation6, SIM)
MA        Train RMSE  Train MSE  Train MAE | Test RMSE  Test MSE  Test MAE
SMA           8.8297    77.9629     7.7140 |    7.7773   60.4857    6.3589
EMA          10.1713   103.4557     8.9939 |    7.6291   58.2031    6.2144
WMA          10.4190   108.5549     9.3089 |    8.4192   70.8835    6.6790
DEMA         12.0354   144.8513    10.7967 |   10.9331  119.5325    9.7477
KAMA         10.5086   110.4305     9.4523 |    7.8188   61.1331    6.4616
MIDPOINT      9.4406    89.1249     8.3914 |    7.8446   61.5385    6.4073
T3           11.9924   143.8166    10.7674 |   12.7682  163.0260   10.4985
TEMA          7.4212    55.0743     5.1376 |    8.1328   66.1423    7.1171
def get_arima_exog(dataframe,original_data, train_len, test_len):
# prepare train and test data for exogenous vr
X_value = pd.DataFrame(low_vol.iloc[:, :])
y_value = pd.DataFrame(low_vol.iloc[:, 3])
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scaler.fit(X_value)
y_scaler.fit(y_value)
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
# Get data and check shape
# X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
# pdb.set_trace()
X_train, X_test, = split_train_test(X_scale_dataset)
y_train, y_test, = split_train_test(y_scale_dataset)
yc_train,yc_test = split_train_test(low_vol_data)
yc = yc_test.values.tolist()
y_train_list = y_train.flatten().tolist()
y_test_list = y_test.flatten().tolist()
# yc_train, yc_test, = split_train_test(original_data)
index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
# Initialize model
model = auto_arima(y_train_list,exogenous = X_train,trace=True, error_action='ignore', start_p=1,start_q=1,max_p=3,max_q=3,d=3,
suppress_warnings=True,stepwise=True,seasonal=True)
# Determine model parameters
print(model.summary())
model.fit(y_train_list,maxiter=200)
order = model.get_params()['order']
print('ARIMA order:', order, '\n')
# Generate predictions
prediction = []
for i in range(len(y_test_list)):
model = pmdarima.ARIMA(order=order)
model.fit(y_train_list)
# print('working on', i+1, 'of', len(y_test), '-- ' + str(int(100 * (i + 1) / len(y_test))) + '% complete')
prediction.append(model.predict(n_periods=1)[0])
y_train_list.append(y_test_list[i])
predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1,1))
y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1,1))
# Generate error data
mse = mean_squared_error(y_test_, predictionte)
rmse = mse ** 0.5
mae = mean_absolute_error(y_test_, predictionte)
return yc,predictionte.flatten().tolist(), mse, rmse, mae
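The test loop in `get_arima_exog` is an expanding-window walk-forward validation: forecast one step ahead, then append the observed test value to the training history and refit. A stripped-down sketch of that pattern (the `forecast_one` callable and the naive persistence forecaster are hypothetical stand-ins for the refit `pmdarima.ARIMA`):

```python
def walk_forward(train, test, forecast_one):
    """Expanding-window walk-forward validation.
    forecast_one(history) returns a one-step-ahead prediction."""
    history = list(train)
    preds = []
    for actual in test:
        preds.append(forecast_one(history))  # predict the next step
        history.append(actual)               # then reveal the true value
    return preds

# Naive persistence forecaster standing in for the refit ARIMA
naive = lambda h: h[-1]
print(walk_forward([1, 2, 3], [4, 5, 6], naive))  # [3, 4, 5]
```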
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
# prepare train and test data
X_value = pd.DataFrame(data.iloc[:, :])
y_value = pd.DataFrame(data.iloc[:, 3])
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
# Get data and check shape
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset) # X has shape (samples, n_steps_in, n_features); yc holds the corresponding closing prices
# pdb.set_trace()
X_train, X_test, = split_train_test(X)
y_train, y_test, = split_train_test(y)
# yc_train, yc_test, = split_train_test(original_data)
index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
det = 20 # fixed offset subtracted from the inverse-transformed test predictions below
input_dim = X_train.shape[1]#3
feature_size = X_train.shape[2]#24
output_dim = y_train.shape[1]#1
# Option 1
# Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
# model.add(Dense(units=64,activation='relu'))
# model.add(Dropout(0.5))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')
# ## Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# # # option 2
# model = Sequential()
# model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
# model.add(Dense(64))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate = 0.001), loss='mean_squared_error', metrics=['accuracy'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 3
# define custom activation
#
class Double_Tanh(Activation):
def __init__(self, activation, **kwargs):
super(Double_Tanh, self).__init__(activation, **kwargs)
self.__name__ = 'double_tanh'
def double_tanh(x):
return (K.tanh(x) * 2)
get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
# Model Generation
model = Sequential()
#check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
model.add(Dense(1))
model.add(Activation(double_tanh))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
# Common code
callbacks = [
EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
fname1 = img_file+'.png'
tensorflow.keras.utils.plot_model(
model, to_file=fname1, show_shapes=True, show_dtype=False,
show_layer_names=True, expand_nested=False, dpi=96,
layer_range=None, show_layer_activations=False
)
history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# plot loss
fname2 = img_file+'-'+ma
plt.title(img_file+'-'+ma+' Loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='validation')
pyplot.legend()
pyplot.savefig(fname2+'.png',dpi='figure')
pyplot.show()
# Option 4
# Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
# model.add(LSTM(units=int(lstm_len/2)))
# model.add(Dense(1, activation='sigmoid'))
# model.compile(loss='mean_squared_error', optimizer='adam')
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Generate predictions
predictiontr = model.predict(X_train, verbose=0)
predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
outputtr = []
for i in range(len(predictiontr)):
outputtr.extend(predictiontr[i])
predictiontr = outputtr
# Generate error data (compare in original price units)
## replace with yc, X_test generated by new multistep method
Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
mse_tr = mean_squared_error(Original_tr, predictiontr)
rmse_tr = mse_tr ** 0.5
mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))
predictionte = model.predict(X_test, verbose=0)
predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
outputte = []
for i in range(len(predictionte)):
outputte.extend(predictionte[i])
predictionte = outputte
# Generate error data (compare in original price units)
Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
mse_te = mean_squared_error(Original_te, predictionte)
rmse_te = mse_te ** 0.5
mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))
return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
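The custom `double_tanh` activation in `get_lstm` simply doubles `tanh`, widening the output range from (-1, 1) to (-2, 2) so the output head can cover targets scaled to (-1, 1) with some headroom. A library-free sketch of the same formula:

```python
import math

def double_tanh(x):
    # Same formula as the Keras activation above: 2 * tanh(x)
    return 2.0 * math.tanh(x)

print(double_tanh(0.0))     # 0.0
print(double_tanh(100.0))   # saturates just below 2.0
print(double_tanh(-100.0))  # saturates just above -2.0
```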
if __name__ == '__main__':
start_time = timeit.default_timer()
simulation7 = {}
imgfile = 'Experiment7'
for ma in optimized_period:
print(ma)
print(functions[ma])
print ( int( optimized_period[ma]))
# if ma == 'SMA':
low_vol = df.apply(lambda c: functions[ma](c, timeperiod = int( optimized_period[ma])))
low_vol = low_vol.fillna(0)
low_vol_data = df['close']
high_vol = pd.DataFrame()
df2 = df.copy()
for i in df2.columns:
if i in low_vol.columns:
high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
high_vol_data = df['close']
## *****************************************************
# Generate ARIMA and LSTM predictions
print('\nWorking on ' + ma + ' predictions')
try:
print('parameters used : ', train_len, test_len)
low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
except Exception:
print('ARIMA error, skipping to next MA type')
continue
Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps
mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
rmse_ftr = mse_ftr ** 0.5
mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
rmse = mse ** 0.5
mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
# Generate prediction accuracy
actual = df['close'].tail(test_len).values
result_1 = []
result_2 = []
for i in range(1, len(final_prediction)):
# Compare prediction to previous close price
if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
result_1.append(1)
elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
result_1.append(1)
else:
result_1.append(0)
# Compare prediction to previous prediction
if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
result_2.append(1)
elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
result_2.append(1)
else:
result_2.append(0)
accuracy_1 = np.mean(result_1)
accuracy_2 = np.mean(result_2)
simulation7[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
'rmse': low_vol_rmse, 'mae' : low_vol_mae},
'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
'rmse': high_vol_rmse, 'mae' : high_vol_mae},
'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
'rmse': rmse_ftr, 'mae' : mae_ftr},
'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
'rmse': rmse, 'mae': mae },
'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
# save simulation data here as checkpoint
with open('simulation7_data.json', 'w') as fp:
json.dump(simulation7, fp)
for ma in simulation7.keys():
print('\n' + ma)
print('Prediction vs Close:\t\t' + str(round(100*simulation7[ma]['accuracy']['prediction vs close'], 2))
+ '% Accuracy')
print('Prediction vs Prediction:\t' + str(round(100*simulation7[ma]['accuracy']['prediction vs prediction'], 2))
+ '% Accuracy')
print('MSE:\t', simulation7[ma]['final']['mse'],
'\nRMSE:\t', simulation7[ma]['final']['rmse'],
'\nMAE:\t', simulation7[ma]['final']['mae'])
elapsed = timeit.default_timer() - start_time
print('Runtime (mins):', elapsed / 60)
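The accuracy loop in the main block scores direction only: a hit is counted when the prediction and the actual close move the same way relative to the previous close. A sketch of the "prediction vs close" rule as a standalone function (the function name and toy data are mine, not from the notebook):

```python
def directional_accuracy(prediction, actual):
    """Fraction of steps where the prediction moves in the same
    direction (up/down) as the actual close, both measured
    relative to the previous actual close."""
    hits = []
    for i in range(1, len(prediction)):
        up_pred = prediction[i] > actual[i - 1]
        down_pred = prediction[i] < actual[i - 1]
        up_act = actual[i] > actual[i - 1]
        down_act = actual[i] < actual[i - 1]
        hits.append(1 if (up_pred and up_act) or (down_pred and down_act) else 0)
    return sum(hits) / len(hits)

print(directional_accuracy([10, 12, 9, 13], [10, 11, 10, 12]))  # 1.0
```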
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-14771.778, Time=12.60 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14135.387, Time=6.27 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15280.870, Time=10.58 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15393.475, Time=8.23 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-14981.217, Time=5.02 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14516.868, Time=13.87 sec
ARIMA(0,3,1)(0,0,0)[0] intercept : AIC=-15663.967, Time=10.19 sec
ARIMA(0,3,0)(0,0,0)[0] intercept : AIC=-13838.679, Time=5.27 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=-14734.479, Time=6.54 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-14866.409, Time=7.59 sec
ARIMA(1,3,0)(0,0,0)[0] intercept : AIC=-16157.403, Time=13.73 sec
ARIMA(2,3,0)(0,0,0)[0] intercept : AIC=-14855.623, Time=10.93 sec
ARIMA(2,3,1)(0,0,0)[0] intercept : AIC=-14720.644, Time=11.37 sec
Best model: ARIMA(1,3,0)(0,0,0)[0] intercept
Total fit time: 122.215 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 0) Log Likelihood 8103.701
Date: Sun, 12 Dec 2021 AIC -16157.403
Time: 18:54:25 BIC -16040.132
Sample: 0 HQIC -16112.366
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
intercept -2.802e-06 7.54e-07 -3.714 0.000 -4.28e-06 -1.32e-06
x1 -2.598e-05 0.001 -0.041 0.967 -0.001 0.001
x2 -2.599e-05 0.001 -0.047 0.963 -0.001 0.001
x3 -2.615e-05 0.001 -0.038 0.970 -0.001 0.001
x4 1.0000 0.001 1507.083 0.000 0.999 1.001
x5 -2.485e-05 0.001 -0.038 0.970 -0.001 0.001
x6 -2.807e-05 3.32e-05 -0.845 0.398 -9.32e-05 3.71e-05
x7 -2.593e-05 8.29e-05 -0.313 0.755 -0.000 0.000
x8 0.0019 7.15e-05 26.753 0.000 0.002 0.002
x9 -1.867e-06 0.001 -0.003 0.998 -0.001 0.001
x10 0.0003 0.000 0.644 0.520 -0.001 0.001
x11 -0.0025 8.93e-05 -28.145 0.000 -0.003 -0.002
x12 0.0015 8.06e-05 18.290 0.000 0.001 0.002
x13 -2.61e-05 0.000 -0.076 0.939 -0.001 0.001
x14 -7.719e-05 0.000 -0.374 0.708 -0.000 0.000
x15 -2.829e-05 8.57e-05 -0.330 0.741 -0.000 0.000
x16 -2.424e-05 0.000 -0.142 0.887 -0.000 0.000
x17 -2.292e-05 9.81e-05 -0.234 0.815 -0.000 0.000
x18 -4.39e-05 0.000 -0.429 0.668 -0.000 0.000
x19 -3.005e-05 0.000 -0.293 0.770 -0.000 0.000
x20 4.559e-05 9.36e-05 0.487 0.626 -0.000 0.000
x21 -7.981e-10 0.001 -9.88e-07 1.000 -0.002 0.002
x22 -1.557e-08 0.000 -0.000 1.000 -0.000 0.000
ar.L1 -0.6667 6.95e-05 -9587.073 0.000 -0.667 -0.667
sigma2 1.314e-10 7.8e-11 1.686 0.092 -2.14e-11 2.84e-10
===================================================================================
Ljung-Box (L1) (Q): 90.59 Jarque-Bera (JB): 3138023.60
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.03 Skew: 5.01
Prob(H) (two-sided): 0.00 Kurtosis: 308.71
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.36e+19. Standard errors may be unstable.
ARIMA order: (1, 3, 0)
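As a sanity check on the stepwise search, AIC = 2k - 2 ln L. Using the log likelihood reported in the summary above (8103.701) and k = 25 estimated parameters (intercept, 22 exogenous coefficients, ar.L1 and sigma2 -- my count from the coefficient table, so treat it as an assumption), the reported AIC of -16157.403 is reproduced up to rounding:

```python
def aic(k, log_likelihood):
    # Akaike information criterion: 2k - 2*ln(L)
    return 2 * k - 2 * log_likelihood

# Values taken from the SARIMAX summary above; k = 25 is my parameter count
print(aic(25, 8103.701))  # about -16157.402, matching the summary up to rounding
```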
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.51163, saving model to LSTM7.h5
48/48 - 3s - loss: 0.0787 - mse: 0.0787 - mae: 0.2232 - val_loss: 0.5116 - val_mse: 0.5116 - val_mae: 0.6749 - lr: 0.0010 - 3s/epoch - 54ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.51163 to 0.10215, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0240 - mse: 0.0240 - mae: 0.1254 - val_loss: 0.1022 - val_mse: 0.1022 - val_mae: 0.2690 - lr: 0.0010 - 187ms/epoch - 4ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.10215 to 0.08625, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0239 - mse: 0.0239 - mae: 0.1220 - val_loss: 0.0862 - val_mse: 0.0862 - val_mae: 0.2475 - lr: 0.0010 - 183ms/epoch - 4ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.08625 to 0.08622, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0209 - mse: 0.0209 - mae: 0.1105 - val_loss: 0.0862 - val_mse: 0.0862 - val_mae: 0.2503 - lr: 0.0010 - 184ms/epoch - 4ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.08622 to 0.05556, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0170 - mse: 0.0170 - mae: 0.1013 - val_loss: 0.0556 - val_mse: 0.0556 - val_mae: 0.1965 - lr: 0.0010 - 183ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.05556
48/48 - 0s - loss: 0.0150 - mse: 0.0150 - mae: 0.0951 - val_loss: 0.0736 - val_mse: 0.0736 - val_mae: 0.2312 - lr: 0.0010 - 174ms/epoch - 4ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.05556 to 0.05386, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0176 - mse: 0.0176 - mae: 0.0998 - val_loss: 0.0539 - val_mse: 0.0539 - val_mae: 0.1947 - lr: 0.0010 - 204ms/epoch - 4ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.05386
48/48 - 0s - loss: 0.0178 - mse: 0.0178 - mae: 0.0999 - val_loss: 0.0788 - val_mse: 0.0788 - val_mae: 0.2416 - lr: 0.0010 - 171ms/epoch - 4ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.05386 to 0.03839, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0188 - mse: 0.0188 - mae: 0.1056 - val_loss: 0.0384 - val_mse: 0.0384 - val_mae: 0.1620 - lr: 0.0010 - 189ms/epoch - 4ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.03839
48/48 - 0s - loss: 0.0161 - mse: 0.0161 - mae: 0.0995 - val_loss: 0.0913 - val_mse: 0.0913 - val_mae: 0.2627 - lr: 0.0010 - 172ms/epoch - 4ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.03839 to 0.02747, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0226 - mse: 0.0226 - mae: 0.1166 - val_loss: 0.0275 - val_mse: 0.0275 - val_mae: 0.1359 - lr: 0.0010 - 185ms/epoch - 4ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.02747
48/48 - 0s - loss: 0.0181 - mse: 0.0181 - mae: 0.1066 - val_loss: 0.0737 - val_mse: 0.0737 - val_mae: 0.2324 - lr: 0.0010 - 182ms/epoch - 4ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.02747 to 0.02661, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0239 - mse: 0.0239 - mae: 0.1240 - val_loss: 0.0266 - val_mse: 0.0266 - val_mae: 0.1316 - lr: 0.0010 - 186ms/epoch - 4ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.02661
48/48 - 0s - loss: 0.0213 - mse: 0.0213 - mae: 0.1188 - val_loss: 0.0989 - val_mse: 0.0989 - val_mae: 0.2754 - lr: 0.0010 - 179ms/epoch - 4ms/step
Epoch 15/500
Epoch 00015: val_loss improved from 0.02661 to 0.01829, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0232 - mse: 0.0232 - mae: 0.1254 - val_loss: 0.0183 - val_mse: 0.0183 - val_mae: 0.1067 - lr: 0.0010 - 203ms/epoch - 4ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.01829
48/48 - 0s - loss: 0.0196 - mse: 0.0196 - mae: 0.1180 - val_loss: 0.1224 - val_mse: 0.1224 - val_mae: 0.3113 - lr: 0.0010 - 173ms/epoch - 4ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.01829
48/48 - 0s - loss: 0.0210 - mse: 0.0210 - mae: 0.1213 - val_loss: 0.0197 - val_mse: 0.0197 - val_mae: 0.1084 - lr: 0.0010 - 174ms/epoch - 4ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.01829
48/48 - 0s - loss: 0.0156 - mse: 0.0156 - mae: 0.1043 - val_loss: 0.1529 - val_mse: 0.1529 - val_mae: 0.3513 - lr: 0.0010 - 174ms/epoch - 4ms/step
Epoch 19/500
Epoch 00019: val_loss improved from 0.01829 to 0.01744, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0173 - mse: 0.0173 - mae: 0.1106 - val_loss: 0.0174 - val_mse: 0.0174 - val_mae: 0.1021 - lr: 0.0010 - 180ms/epoch - 4ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0125 - mse: 0.0125 - mae: 0.0923 - val_loss: 0.1679 - val_mse: 0.1679 - val_mae: 0.3697 - lr: 0.0010 - 174ms/epoch - 4ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0146 - mse: 0.0146 - mae: 0.1007 - val_loss: 0.0267 - val_mse: 0.0267 - val_mae: 0.1205 - lr: 0.0010 - 172ms/epoch - 4ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0840 - val_loss: 0.1515 - val_mse: 0.1515 - val_mae: 0.3473 - lr: 0.0010 - 174ms/epoch - 4ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0849 - val_loss: 0.0436 - val_mse: 0.0436 - val_mae: 0.1599 - lr: 0.0010 - 175ms/epoch - 4ms/step
Epoch 24/500
Epoch 00024: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00024: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0752 - val_loss: 0.1504 - val_mse: 0.1504 - val_mae: 0.3461 - lr: 0.0010 - 175ms/epoch - 4ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0128 - mse: 0.0128 - mae: 0.0910 - val_loss: 0.1090 - val_mse: 0.1090 - val_mae: 0.2871 - lr: 1.0000e-04 - 172ms/epoch - 4ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0608 - val_loss: 0.0960 - val_mse: 0.0960 - val_mae: 0.2657 - lr: 1.0000e-04 - 175ms/epoch - 4ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0556 - val_loss: 0.0889 - val_mse: 0.0889 - val_mae: 0.2534 - lr: 1.0000e-04 - 173ms/epoch - 4ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0534 - val_loss: 0.0851 - val_mse: 0.0851 - val_mae: 0.2463 - lr: 1.0000e-04 - 177ms/epoch - 4ms/step
Epoch 29/500
Epoch 00029: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00029: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0563 - val_loss: 0.0812 - val_mse: 0.0812 - val_mae: 0.2388 - lr: 1.0000e-04 - 175ms/epoch - 4ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0525 - val_loss: 0.0809 - val_mse: 0.0809 - val_mae: 0.2383 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0535 - val_loss: 0.0805 - val_mse: 0.0805 - val_mae: 0.2376 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0538 - val_loss: 0.0802 - val_mse: 0.0802 - val_mae: 0.2369 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0535 - val_loss: 0.0800 - val_mse: 0.0800 - val_mae: 0.2366 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 34/500
Epoch 00034: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00034: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0529 - val_loss: 0.0799 - val_mse: 0.0799 - val_mae: 0.2364 - lr: 1.0000e-05 - 177ms/epoch - 4ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0532 - val_loss: 0.0798 - val_mse: 0.0798 - val_mae: 0.2361 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0522 - val_loss: 0.0794 - val_mse: 0.0794 - val_mae: 0.2355 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0533 - val_loss: 0.0795 - val_mse: 0.0795 - val_mae: 0.2355 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0555 - val_loss: 0.0794 - val_mse: 0.0794 - val_mae: 0.2353 - lr: 1.0000e-05 - 172ms/epoch - 4ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0502 - val_loss: 0.0793 - val_mse: 0.0793 - val_mae: 0.2351 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0518 - val_loss: 0.0790 - val_mse: 0.0790 - val_mae: 0.2346 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0517 - val_loss: 0.0788 - val_mse: 0.0788 - val_mae: 0.2342 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0526 - val_loss: 0.0789 - val_mse: 0.0789 - val_mae: 0.2344 - lr: 1.0000e-05 - 171ms/epoch - 4ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0542 - val_loss: 0.0787 - val_mse: 0.0787 - val_mae: 0.2339 - lr: 1.0000e-05 - 172ms/epoch - 4ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0521 - val_loss: 0.0786 - val_mse: 0.0786 - val_mae: 0.2337 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0526 - val_loss: 0.0784 - val_mse: 0.0784 - val_mae: 0.2332 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0528 - val_loss: 0.0783 - val_mse: 0.0783 - val_mae: 0.2330 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0522 - val_loss: 0.0784 - val_mse: 0.0784 - val_mae: 0.2332 - lr: 1.0000e-05 - 172ms/epoch - 4ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0512 - val_loss: 0.0785 - val_mse: 0.0785 - val_mae: 0.2333 - lr: 1.0000e-05 - 169ms/epoch - 4ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0543 - val_loss: 0.0785 - val_mse: 0.0785 - val_mae: 0.2334 - lr: 1.0000e-05 - 172ms/epoch - 4ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0513 - val_loss: 0.0788 - val_mse: 0.0788 - val_mae: 0.2338 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0533 - val_loss: 0.0784 - val_mse: 0.0784 - val_mae: 0.2330 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0531 - val_loss: 0.0787 - val_mse: 0.0787 - val_mae: 0.2337 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0514 - val_loss: 0.0786 - val_mse: 0.0786 - val_mae: 0.2335 - lr: 1.0000e-05 - 190ms/epoch - 4ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0530 - val_loss: 0.0786 - val_mse: 0.0786 - val_mae: 0.2334 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0541 - val_loss: 0.0787 - val_mse: 0.0787 - val_mae: 0.2336 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0540 - val_loss: 0.0787 - val_mse: 0.0787 - val_mae: 0.2335 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0555 - val_loss: 0.0786 - val_mse: 0.0786 - val_mae: 0.2334 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0518 - val_loss: 0.0787 - val_mse: 0.0787 - val_mae: 0.2336 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0502 - val_loss: 0.0785 - val_mse: 0.0785 - val_mae: 0.2332 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0534 - val_loss: 0.0778 - val_mse: 0.0778 - val_mae: 0.2319 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0522 - val_loss: 0.0780 - val_mse: 0.0780 - val_mae: 0.2321 - lr: 1.0000e-05 - 171ms/epoch - 4ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0536 - val_loss: 0.0778 - val_mse: 0.0778 - val_mae: 0.2318 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0543 - val_loss: 0.0776 - val_mse: 0.0776 - val_mae: 0.2314 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0522 - val_loss: 0.0774 - val_mse: 0.0774 - val_mae: 0.2309 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0527 - val_loss: 0.0772 - val_mse: 0.0772 - val_mae: 0.2305 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0553 - val_loss: 0.0770 - val_mse: 0.0770 - val_mae: 0.2300 - lr: 1.0000e-05 - 177ms/epoch - 4ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0531 - val_loss: 0.0773 - val_mse: 0.0773 - val_mae: 0.2306 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0513 - val_loss: 0.0779 - val_mse: 0.0779 - val_mae: 0.2317 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.01744
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0521 - val_loss: 0.0783 - val_mse: 0.0783 - val_mae: 0.2325 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 00069: early stopping
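Training stops here at epoch 69: the best val_loss (0.01744) was set at epoch 19 and then failed to improve for 50 consecutive epochs, exactly the `EarlyStopping(patience=50)` budget configured above. A minimal sketch of that stopping rule with hypothetical loss values:

```python
def stopping_epoch(val_losses, patience):
    """Return the 1-based epoch at which training stops: the first
    epoch completing `patience` consecutive non-improvements, else
    the last epoch."""
    best = float('inf')
    since_improve = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            since_improve = 0
        else:
            since_improve += 1
            if since_improve >= patience:
                return epoch
    return len(val_losses)

# Best at epoch 2, then 3 non-improvements -> stops at epoch 5 with patience=3
print(stopping_epoch([0.5, 0.2, 0.3, 0.25, 0.21, 0.4], 3))  # 5
```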
SMA
Prediction vs Close: 50.0% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 23.38002191723926
RMSE: 4.835289227878645
MAE: 3.8675720673818827
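Each MA run above splits the close series into a smooth low-volatility component (the moving average, handed to ARIMA) and a residual high-volatility component (close minus MA, handed to the LSTM); by construction the two components sum back to the original series. A toy sketch with a hypothetical `sma` helper that mirrors the notebook's `fillna(0)`:

```python
def sma(series, period):
    """Simple moving average; the first period-1 entries are filled
    with 0, mirroring fillna(0) in the notebook."""
    out = []
    for i in range(len(series)):
        if i + 1 < period:
            out.append(0.0)
        else:
            window = series[i + 1 - period:i + 1]
            out.append(sum(window) / period)
    return out

close = [10.0, 11.0, 12.0, 11.5, 12.5]
low_vol = sma(close, 3)
high_vol = [c - l for c, l in zip(close, low_vol)]
# The decomposition sums back to the close series
print([l + h for l, h in zip(low_vol, high_vol)])
```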
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.831, Time=2.51 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=4.39 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16288.946, Time=7.13 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=5.85 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16226.419, Time=11.51 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-13742.844, Time=8.75 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16101.256, Time=19.30 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17006.489, Time=2.83 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17002.686, Time=3.23 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17086.654, Time=6.68 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=-16097.512, Time=16.53 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17002.132, Time=3.74 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-17004.011, Time=4.35 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 96.817 seconds
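The stepwise search above ranks candidate orders by the Akaike information criterion, AIC = 2k - 2 ln L. As a quick sanity check against the reported numbers: the winning SARIMAX(3,3,1) fit reports Log Likelihood 8570.327 with 27 estimated parameters (22 exogenous terms x1-x22, 3 AR, 1 MA, and sigma2), which reproduces the reported AIC:

```python
def aic(log_likelihood: float, n_params: int) -> float:
    # Akaike information criterion: lower is better.
    return 2 * n_params - 2 * log_likelihood

# 22 exogenous + 3 AR + 1 MA + sigma2 = 27 parameters.
print(aic(8570.327, 27))  # matches the reported AIC of -17086.654
```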
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8570.327
Date: Sun, 12 Dec 2021 AIC -17086.654
Time: 18:57:07 BIC -16960.001
Sample: 0 HQIC -17038.014
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.333e-10 9.31e-21 -2.51e+10 0.000 -2.33e-10 -2.33e-10
x2 -2.326e-10 9.29e-21 -2.5e+10 0.000 -2.33e-10 -2.33e-10
x3 -2.342e-10 9.32e-21 -2.51e+10 0.000 -2.34e-10 -2.34e-10
x4 1.0000 9.31e-21 1.07e+20 0.000 1.000 1.000
x5 -2.121e-10 8.87e-21 -2.39e+10 0.000 -2.12e-10 -2.12e-10
x6 -8.055e-10 1.64e-20 -4.9e+10 0.000 -8.05e-10 -8.05e-10
x7 -2.312e-10 9.27e-21 -2.49e+10 0.000 -2.31e-10 -2.31e-10
x8 -2.26e-10 9.17e-21 -2.47e+10 0.000 -2.26e-10 -2.26e-10
x9 -1.174e-11 1.86e-21 -6.3e+09 0.000 -1.17e-11 -1.17e-11
x10 -4.486e-11 3.98e-21 -1.13e+10 0.000 -4.49e-11 -4.49e-11
x11 -2.235e-10 9.11e-21 -2.45e+10 0.000 -2.23e-10 -2.23e-10
x12 -2.28e-10 9.21e-21 -2.48e+10 0.000 -2.28e-10 -2.28e-10
x13 -2.332e-10 9.31e-21 -2.51e+10 0.000 -2.33e-10 -2.33e-10
x14 -1.78e-09 2.57e-20 -6.92e+10 0.000 -1.78e-09 -1.78e-09
x15 -2.118e-10 8.84e-21 -2.4e+10 0.000 -2.12e-10 -2.12e-10
x16 -5.28e-10 1.4e-20 -3.76e+10 0.000 -5.28e-10 -5.28e-10
x17 -2.173e-10 8.94e-21 -2.43e+10 0.000 -2.17e-10 -2.17e-10
x18 -3.83e-11 3.74e-21 -1.02e+10 0.000 -3.83e-11 -3.83e-11
x19 -2.606e-10 9.86e-21 -2.64e+10 0.000 -2.61e-10 -2.61e-10
x20 -2.433e-10 9.48e-21 -2.57e+10 0.000 -2.43e-10 -2.43e-10
x21 -3.774e-13 1.42e-24 -2.65e+11 0.000 -3.77e-13 -3.77e-13
x22 -1.096e-11 1.35e-24 -8.11e+12 0.000 -1.1e-11 -1.1e-11
ar.L1 -0.4919 1.5e-22 -3.27e+21 0.000 -0.492 -0.492
ar.L2 -0.1922 8.41e-23 -2.28e+21 0.000 -0.192 -0.192
ar.L3 -0.0462 4.01e-23 -1.15e+21 0.000 -0.046 -0.046
ma.L1 -0.7070 3.34e-22 -2.12e+21 0.000 -0.707 -0.707
sigma2 8.977e-11 6.95e-11 1.291 0.197 -4.65e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 54.80 Jarque-Bera (JB): 4212163.49
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.43
Prob(H) (two-sided): 0.00 Kurtosis: 357.21
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.65e+43. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
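The diagnostics above report extreme leptokurtosis (kurtosis 357.21 against 3 for a normal distribution), which is what drives the enormous Jarque-Bera statistic and echoes the non-mesokurtic concern raised in the introduction. The JB statistic follows directly from the sample skew S and raw kurtosis K, JB = n/6 * (S^2 + (K-3)^2 / 4); plugging in the rounded values from the table recovers the reported figure to within rounding error:

```python
def jarque_bera(n: int, skew: float, kurtosis: float) -> float:
    # Jarque-Bera normality statistic from sample skewness and raw kurtosis.
    return n / 6.0 * (skew ** 2 + (kurtosis - 3.0) ** 2 / 4.0)

# n=808 observations; skew and kurtosis as reported in the SARIMAX diagnostics.
jb = jarque_bera(808, 5.43, 357.21)
print(jb)  # within ~0.4% of the reported JB of 4212163.49 (S and K are rounded)
```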
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.31018, saving model to LSTM7.h5
16/16 - 2s - loss: 0.0462 - mse: 0.0462 - mae: 0.1644 - val_loss: 0.3102 - val_mse: 0.3102 - val_mae: 0.5170 - lr: 0.0010 - 2s/epoch - 137ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.31018 to 0.08240, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0570 - mse: 0.0570 - mae: 0.1988 - val_loss: 0.0824 - val_mse: 0.0824 - val_mae: 0.2560 - lr: 0.0010 - 84ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.08240 to 0.05511, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0147 - mse: 0.0147 - mae: 0.0971 - val_loss: 0.0551 - val_mse: 0.0551 - val_mae: 0.2073 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.05511 to 0.02778, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0170 - mse: 0.0170 - mae: 0.1049 - val_loss: 0.0278 - val_mse: 0.0278 - val_mae: 0.1439 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.02778 to 0.02773, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0858 - val_loss: 0.0277 - val_mse: 0.0277 - val_mae: 0.1434 - lr: 0.0010 - 83ms/epoch - 5ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.02773 to 0.02567, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0793 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1374 - lr: 0.0010 - 116ms/epoch - 7ms/step
Epochs 7-56: val_loss plateaued between 0.026 and 0.032 and did not improve from 0.02567; ReduceLROnPlateau reduced the learning rate to 1.0000e-04 at epoch 11 and to 1.0000e-05 at epoch 16 (floor reached at epoch 21), with training loss settling near 0.006
Epoch 00056: early stopping
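The runs above combine ModelCheckpoint (saving LSTM7.h5 on improvement), ReduceLROnPlateau, and EarlyStopping. The plateau behavior in the log can be sketched with a simplified, hypothetical re-implementation of the learning-rate logic (not the actual Keras ReduceLROnPlateau class, which adds cooldown and threshold options):

```python
class PlateauScheduler:
    """Simplified sketch of ReduceLROnPlateau-style logic (not the Keras class)."""

    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
        self.lr, self.factor, self.patience, self.min_lr = lr, factor, patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss: float) -> float:
        if val_loss < self.best:          # improvement: reset the patience counter
            self.best = val_loss
            self.wait = 0
        else:                             # plateau: count, then cut the rate
            self.wait += 1
            if self.wait >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr

sched = PlateauScheduler()
# val_loss improves for 6 epochs, then plateaus -- mirroring the log above.
losses = [0.31, 0.082, 0.055, 0.028, 0.0277, 0.0257] + [0.03] * 10
lrs = [sched.step(v) for v in losses]
```

With patience 5, the rate drops to 1e-04 on the 5th plateau epoch and to 1e-05 five epochs later, matching the reductions printed at epochs 11 and 16 above.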
SMA
Prediction vs Close: 50.0% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 23.38002191723926
RMSE: 4.835289227878645
MAPE: 3.8675720673818827
EMA
Prediction vs Close: 55.6% Accuracy
Prediction vs Prediction: 51.49% Accuracy
MSE: 35.056668726825066
RMSE: 5.920867227596399
MAPE: 4.704877912816018
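The per-indicator blocks above report directional accuracy alongside MSE, RMSE, and MAPE. A hypothetical helper showing how such a report can be computed (the exact accuracy definitions used in the notebook are assumptions here; this sketch scores whether the predicted series moves in the same direction as the actual closes):

```python
import numpy as np

def report(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))                              # RMSE = sqrt(MSE)
    mape = float(np.mean(np.abs(err / y_true)) * 100)       # percent error
    # Directional accuracy: did the prediction move the same way as the close?
    direction = np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape,
            "Accuracy": float(direction.mean() * 100)}

y_true = np.array([100.0, 102.0, 101.0, 103.0])
y_pred = np.array([101.0, 101.5, 101.5, 102.0])
print(report(y_true, y_pred))
```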
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
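The WMA described in the TA-Lib help above weights prices linearly, with the most recent bar weighted highest. A minimal numpy sketch (assuming TA-Lib's linear 1..n weighting):

```python
import numpy as np

def wma(price: np.ndarray, timeperiod: int = 30) -> np.ndarray:
    # Linearly weighted moving average: the newest price gets weight `timeperiod`,
    # the oldest in the window gets weight 1.
    w = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = np.dot(window, w) / w.sum()
    return out

prices = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(wma(prices, timeperiod=3))  # first timeperiod-1 values are NaN
```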
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16080.357, Time=11.01 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14973.799, Time=5.84 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15549.629, Time=1.74 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15317.999, Time=8.07 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16061.924, Time=9.28 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15376.406, Time=14.58 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16186.215, Time=3.34 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15308.706, Time=13.75 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-14920.393, Time=13.37 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-16184.203, Time=2.99 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 83.993 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8118.107
Date: Sun, 12 Dec 2021 AIC -16186.215
Time: 19:06:46 BIC -16068.944
Sample: 0 HQIC -16141.178
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -9.919e-15 0.000 -8.4e-11 1.000 -0.000 0.000
x2 3.194e-15 6.3e-05 5.07e-11 1.000 -0.000 0.000
x3 3.066e-15 7.71e-05 3.98e-11 1.000 -0.000 0.000
x4 1.0000 4.4e-05 2.27e+04 0.000 1.000 1.000
x5 -3.977e-15 4.68e-05 -8.49e-11 1.000 -9.18e-05 9.18e-05
x6 -5.906e-17 8.34e-05 -7.08e-13 1.000 -0.000 0.000
x7 -8.726e-15 7.85e-05 -1.11e-10 1.000 -0.000 0.000
x8 0.0014 4.94e-05 27.704 0.000 0.001 0.001
x9 -3.542e-15 0.001 -2.63e-12 1.000 -0.003 0.003
x10 -0.0012 0.001 -1.566 0.117 -0.003 0.000
x11 0.0052 3.01e-05 172.396 0.000 0.005 0.005
x12 -0.0065 0.000 -49.747 0.000 -0.007 -0.006
x13 1.963e-14 7.85e-05 2.5e-10 1.000 -0.000 0.000
x14 -2.134e-14 0.000 -1.01e-10 1.000 -0.000 0.000
x15 3.464e-12 0.000 2.92e-08 1.000 -0.000 0.000
x16 -7.174e-13 6.45e-05 -1.11e-08 1.000 -0.000 0.000
x17 2.537e-13 7.42e-05 3.42e-09 1.000 -0.000 0.000
x18 -2.964e-15 0.000 -7.78e-12 1.000 -0.001 0.001
x19 -3.613e-12 8.67e-05 -4.17e-08 1.000 -0.000 0.000
x20 6.244e-14 0.000 2.1e-10 1.000 -0.001 0.001
x21 -4.242e-16 0.000 -1.47e-12 1.000 -0.001 0.001
x22 -2.128e-15 0.001 -1.74e-12 1.000 -0.002 0.002
ma.L1 -1.3894 4.16e-05 -3.34e+04 0.000 -1.389 -1.389
ma.L2 0.4036 0.000 3637.465 0.000 0.403 0.404
sigma2 1.287e-10 7.27e-11 1.770 0.077 -1.38e-11 2.71e-10
===================================================================================
Ljung-Box (L1) (Q): 69.00 Jarque-Bera (JB): 6269147.49
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 12.07
Prob(H) (two-sided): 0.00 Kurtosis: 434.65
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.47e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.16222, saving model to LSTM7.h5
17/17 - 3s - loss: 0.2269 - mse: 0.2269 - mae: 0.3844 - val_loss: 0.1622 - val_mse: 0.1622 - val_mae: 0.3887 - lr: 0.0010 - 3s/epoch - 151ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.16222 to 0.14651, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0613 - mse: 0.0613 - mae: 0.2144 - val_loss: 0.1465 - val_mse: 0.1465 - val_mae: 0.3680 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.14651 to 0.11348, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0237 - mse: 0.0237 - mae: 0.1218 - val_loss: 0.1135 - val_mse: 0.1135 - val_mae: 0.3203 - lr: 0.0010 - 91ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.11348 to 0.07060, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0231 - mse: 0.0231 - mae: 0.1214 - val_loss: 0.0706 - val_mse: 0.0706 - val_mae: 0.2457 - lr: 0.0010 - 95ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.07060 to 0.06062, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0175 - mse: 0.0175 - mae: 0.1070 - val_loss: 0.0606 - val_mse: 0.0606 - val_mae: 0.2244 - lr: 0.0010 - 88ms/epoch - 5ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.06062 to 0.04813, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0165 - mse: 0.0165 - mae: 0.1023 - val_loss: 0.0481 - val_mse: 0.0481 - val_mae: 0.1959 - lr: 0.0010 - 90ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.04813
17/17 - 0s - loss: 0.0149 - mse: 0.0149 - mae: 0.0990 - val_loss: 0.0483 - val_mse: 0.0483 - val_mae: 0.1965 - lr: 0.0010 - 77ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.04813 to 0.04461, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0151 - mse: 0.0151 - mae: 0.0979 - val_loss: 0.0446 - val_mse: 0.0446 - val_mae: 0.1874 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.04461
17/17 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0869 - val_loss: 0.0475 - val_mse: 0.0475 - val_mae: 0.1943 - lr: 0.0010 - 72ms/epoch - 4ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.04461 to 0.04192, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0129 - mse: 0.0129 - mae: 0.0900 - val_loss: 0.0419 - val_mse: 0.0419 - val_mae: 0.1804 - lr: 0.0010 - 89ms/epoch - 5ms/step
Epochs 11-60: val_loss plateaued between 0.042 and 0.052 and did not improve from 0.04192; ReduceLROnPlateau reduced the learning rate to 1.0000e-04 at epoch 15 and to 1.0000e-05 at epoch 20 (floor reached at epoch 25), with training loss settling near 0.008
Epoch 00060: early stopping
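The "val_loss improved / did not improve", learning-rate reduction, and "early stopping" lines above are the standard traces of the Keras EarlyStopping, ModelCheckpoint, and ReduceLROnPlateau callbacks. The patience logic behind the early stop can be sketched in plain Python (the patience value here is an assumption; the notebook's actual setting is not shown):

```python
def early_stopping_trace(val_losses, patience=10):
    """Return the 1-based epoch at which patience-based early stopping
    would fire, or None if training runs to completion.

    Mirrors the core of Keras EarlyStopping: stop once `patience`
    consecutive epochs pass without a new best val_loss."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:          # "val_loss improved from ... to ..."
            best = loss
            wait = 0
        else:                    # "val_loss did not improve from ..."
            wait += 1
            if wait >= patience:
                return epoch     # "Epoch 000NN: early stopping"
    return None

# Best val_loss at epoch 2, then a long plateau: stops 10 epochs later.
losses = [0.5, 0.0419] + [0.052] * 12
print(early_stopping_trace(losses, patience=10))  # → 12
```

The real callback additionally supports `min_delta` and `restore_best_weights`; combined with ModelCheckpoint's `save_best_only=True`, the best weights survive the plateau even though training continues past it.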
Model   Prediction vs Close   Prediction vs Prediction   MSE       RMSE     MAPE
SMA     50.00%                52.24%                     23.3800   4.8353   3.8676
EMA     55.60%                51.49%                     35.0567   5.9209   4.7049
WMA     52.24%                47.76%                     44.8719   6.6987   5.3307
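The MSE, RMSE, MAPE, and accuracy figures above can be reproduced with a few lines of NumPy. A hedged sketch follows; the exact definition of the "Prediction vs Close" accuracy in the original notebook is an assumption — here it is read as how often the predicted direction of movement matches the actual close-to-close direction:

```python
import numpy as np

def regression_metrics(actual, predicted):
    """MSE, RMSE, and MAPE (in percent) for a point forecast."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    err = actual - predicted
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / actual)) * 100
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Percent of steps where the predicted move matches the actual move.

    One plausible reading of 'Prediction vs Close': compare the sign of
    (prediction[t] - close[t-1]) with the sign of (close[t] - close[t-1])."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    pred_dir = np.sign(predicted[1:] - actual[:-1])
    true_dir = np.sign(actual[1:] - actual[:-1])
    return np.mean(pred_dir == true_dir) * 100

close = np.array([100.0, 101.0, 99.5, 100.5])
preds = np.array([100.2, 100.6, 100.1, 100.9])
mse, rmse, mape = regression_metrics(close, preds)  # mse → 0.18
```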
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
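TA-Lib's DEMA follows the standard double-exponential-moving-average definition, DEMA = 2·EMA(price, n) − EMA(EMA(price, n), n). A minimal NumPy sketch of that formula (this is the textbook recursion; TA-Lib's warm-up behavior differs slightly in how the first values are seeded):

```python
import numpy as np

def ema(prices, period):
    """Exponential moving average with alpha = 2 / (period + 1),
    seeded with the first price (TA-Lib seeds with an SMA instead)."""
    alpha = 2.0 / (period + 1)
    out = np.empty(len(prices))
    out[0] = prices[0]
    for i in range(1, len(prices)):
        out[i] = alpha * prices[i] + (1 - alpha) * out[i - 1]
    return out

def dema(prices, period=30):
    """DEMA = 2 * EMA - EMA(EMA): cancels much of the lag a single EMA
    introduces by subtracting a second, doubly smoothed pass."""
    prices = np.asarray(prices, float)
    e1 = ema(prices, period)
    e2 = ema(e1, period)
    return 2 * e1 - e2

# For a constant series, every moving average equals that constant.
print(dema([5.0] * 10, period=3))  # every entry is 5.0
```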
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.780, Time=2.53 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=4.33 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15584.877, Time=8.18 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=5.61 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15271.475, Time=7.54 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15128.422, Time=10.03 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16352.675, Time=16.56 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17028.022, Time=5.28 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17002.621, Time=3.19 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17085.445, Time=6.00 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=15.71 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17001.997, Time=3.29 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16996.668, Time=4.26 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 92.506 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.723
Date:                Sun, 12 Dec 2021   AIC                         -17085.445
Time:                        19:12:20   BIC                         -16958.792
Sample:                             0   HQIC                        -17036.805
                                - 808
Covariance Type:                  opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.8e-10 1.36e-20 -2.05e+10 0.000 -2.8e-10 -2.8e-10
x2 -2.817e-10 1.37e-20 -2.06e+10 0.000 -2.82e-10 -2.82e-10
x3 -2.805e-10 1.36e-20 -2.06e+10 0.000 -2.8e-10 -2.8e-10
x4 1.0000 1.37e-20 7.33e+19 0.000 1.000 1.000
x5 -2.598e-10 1.31e-20 -1.98e+10 0.000 -2.6e-10 -2.6e-10
x6 -1.389e-09 2.98e-20 -4.66e+10 0.000 -1.39e-09 -1.39e-09
x7 -2.789e-10 1.36e-20 -2.05e+10 0.000 -2.79e-10 -2.79e-10
x8 -2.761e-10 1.35e-20 -2.04e+10 0.000 -2.76e-10 -2.76e-10
x9 -2.219e-12 3.36e-22 -6.6e+09 0.000 -2.22e-12 -2.22e-12
x10 -1.345e-10 9.37e-21 -1.43e+10 0.000 -1.34e-10 -1.34e-10
x11 -2.899e-10 1.39e-20 -2.09e+10 0.000 -2.9e-10 -2.9e-10
x12 -2.602e-10 1.32e-20 -1.98e+10 0.000 -2.6e-10 -2.6e-10
x13 -2.807e-10 1.36e-20 -2.06e+10 0.000 -2.81e-10 -2.81e-10
x14 -1.87e-09 3.52e-20 -5.31e+10 0.000 -1.87e-09 -1.87e-09
x15 -2.825e-10 1.37e-20 -2.07e+10 0.000 -2.82e-10 -2.82e-10
x16 -8.187e-11 7.33e-21 -1.12e+10 0.000 -8.19e-11 -8.19e-11
x17 -2.441e-10 1.27e-20 -1.92e+10 0.000 -2.44e-10 -2.44e-10
x18 -6.411e-10 2.06e-20 -3.11e+10 0.000 -6.41e-10 -6.41e-10
x19 -2.929e-10 1.39e-20 -2.11e+10 0.000 -2.93e-10 -2.93e-10
x20 -4.339e-10 1.7e-20 -2.56e+10 0.000 -4.34e-10 -4.34e-10
x21 -3.589e-13 2.52e-24 -1.42e+11 0.000 -3.59e-13 -3.59e-13
x22 -1.088e-11 2.36e-24 -4.6e+12 0.000 -1.09e-11 -1.09e-11
ar.L1 -0.4923 1.46e-22 -3.37e+21 0.000 -0.492 -0.492
ar.L2 -0.1923 8.47e-23 -2.27e+21 0.000 -0.192 -0.192
ar.L3 -0.0462 4.02e-23 -1.15e+21 0.000 -0.046 -0.046
ma.L1 -0.7077 3.31e-22 -2.14e+21 0.000 -0.708 -0.708
sigma2 8.99e-11 6.95e-11 1.293 0.196 -4.64e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                 55.15   Jarque-Bera (JB):           4171184.78
Prob(Q):                             0.00   Prob(JB):                         0.00
Heteroskedasticity (H):              0.00   Skew:                             5.27
Prob(H) (two-sided):                 0.00   Kurtosis:                       355.49
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.53e+42. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
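With the ARIMA order chosen, the hybrid scheme fits a linear model to the series and hands the nonlinear leftovers to the LSTM. A minimal sketch of that split, assuming the residual-modelling variant of the hybrid (a least-squares AR(3) stands in for ARIMA here so the example is self-contained; the notebook's exact wiring may differ):

```python
import numpy as np

# Synthetic stand-in series (an assumption; the notebook uses prices).
rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=200)) + 50

# Step 1: fit a linear model (ARIMA in the notebook; here an AR(3) via
# least squares) and take its in-sample residuals.
p = 3
X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
y = series[p:]
A = np.column_stack([np.ones(len(X)), X])   # intercept + lagged values
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ coef

# Step 2: the LSTM is trained on `residuals` (whatever the linear model
# cannot explain); the hybrid forecast then adds the two parts back:
#     hybrid_forecast = linear_forecast + lstm_residual_forecast
print(residuals.shape)   # residual series fed to the LSTM stage
```

The appeal of this split is that each model sees data in its comfort zone: the linear stage absorbs trend and autocorrelation, leaving the LSTM a roughly zero-mean, more stationary target.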
Epoch 1/500
Epoch 00001: val_loss improved from inf to 1.15301, saving model to LSTM7.h5
10/10 - 2s - loss: 0.7441 - mse: 0.7441 - mae: 0.7483 - val_loss: 1.1530 - val_mse: 1.1530 - val_mae: 1.0424 - lr: 0.0010 - 2s/epoch - 226ms/step
Epochs 2-36: val_loss improved from 1.15301 to a best of 0.12866 at epoch 32 (saving model to LSTM7.h5 at each improvement), then did not improve over epochs 33-36 - lr: 0.0010
Epoch 37/500
Epoch 00037: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00037: val_loss did not improve from 0.12866
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0613 - val_loss: 0.1295 - val_mse: 0.1295 - val_mae: 0.3401 - lr: 0.0010 - 53ms/epoch - 5ms/step
Epoch 38: val_loss did not improve from 0.12866; epochs 39-45: val_loss improved from 0.12793 to a best of 0.12583 (saving model to LSTM7.h5 at each improvement); epochs 46-48: no further improvement - lr: 1.0000e-04
Epoch 49/500
Epoch 00049: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00049: val_loss did not improve from 0.12583
10/10 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0600 - val_loss: 0.1262 - val_mse: 0.1262 - val_mae: 0.3356 - lr: 1.0000e-04 - 53ms/epoch - 5ms/step
Epochs 50-113: val_loss crept down from 0.12583 to a best of 0.12417 at epoch 113, improving in small bursts (epochs 65-75, 92-95, 100-103, 107-113, saving model to LSTM7.h5 at each improvement) with long plateaus in between - lr: 1.0000e-05
Epochs 114-127: val_loss did not improve from 0.12417 - lr: 1.0000e-05
Epoch 128/500
Epoch 00128: val_loss did not improve from 0.12417
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0585 - val_loss: 0.1244 - val_mse: 0.1244 - val_mae: 0.3330 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 129/500
Epoch 00129: val_loss did not improve from 0.12417
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0594 - val_loss: 0.1244 - val_mse: 0.1244 - val_mae: 0.3330 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 130/500
Epoch 00130: val_loss did not improve from 0.12417
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0585 - val_loss: 0.1243 - val_mse: 0.1243 - val_mae: 0.3329 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 131/500
Epoch 00131: val_loss did not improve from 0.12417
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0576 - val_loss: 0.1242 - val_mse: 0.1242 - val_mae: 0.3328 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 132/500
Epoch 00132: val_loss did not improve from 0.12417
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0573 - val_loss: 0.1242 - val_mse: 0.1242 - val_mae: 0.3328 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 133/500
Epoch 00133: val_loss improved from 0.12417 to 0.12407, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0568 - val_loss: 0.1241 - val_mse: 0.1241 - val_mae: 0.3326 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 134/500
Epoch 00134: val_loss improved from 0.12407 to 0.12394, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0597 - val_loss: 0.1239 - val_mse: 0.1239 - val_mae: 0.3324 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 135/500
Epoch 00135: val_loss improved from 0.12394 to 0.12385, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0541 - val_loss: 0.1239 - val_mse: 0.1239 - val_mae: 0.3322 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 136/500
Epoch 00136: val_loss improved from 0.12385 to 0.12374, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0568 - val_loss: 0.1237 - val_mse: 0.1237 - val_mae: 0.3321 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 137/500
Epoch 00137: val_loss improved from 0.12374 to 0.12370, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0592 - val_loss: 0.1237 - val_mse: 0.1237 - val_mae: 0.3320 - lr: 1.0000e-05 - 94ms/epoch - 9ms/step
Epoch 138/500
Epoch 00138: val_loss improved from 0.12370 to 0.12370, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0591 - val_loss: 0.1237 - val_mse: 0.1237 - val_mae: 0.3320 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 139/500
Epoch 00139: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0572 - val_loss: 0.1237 - val_mse: 0.1237 - val_mae: 0.3320 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 140/500
Epoch 00140: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0597 - val_loss: 0.1238 - val_mse: 0.1238 - val_mae: 0.3321 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 141/500
Epoch 00141: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0588 - val_loss: 0.1238 - val_mse: 0.1238 - val_mae: 0.3321 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 142/500
Epoch 00142: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0563 - val_loss: 0.1239 - val_mse: 0.1239 - val_mae: 0.3323 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 143/500
Epoch 00143: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0582 - val_loss: 0.1238 - val_mse: 0.1238 - val_mae: 0.3322 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 144/500
Epoch 00144: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0622 - val_loss: 0.1238 - val_mse: 0.1238 - val_mae: 0.3321 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 145/500
Epoch 00145: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0601 - val_loss: 0.1238 - val_mse: 0.1238 - val_mae: 0.3322 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 146/500
Epoch 00146: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0592 - val_loss: 0.1239 - val_mse: 0.1239 - val_mae: 0.3323 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 147/500
Epoch 00147: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0573 - val_loss: 0.1240 - val_mse: 0.1240 - val_mae: 0.3324 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 148/500
Epoch 00148: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0563 - val_loss: 0.1239 - val_mse: 0.1239 - val_mae: 0.3323 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 149/500
Epoch 00149: val_loss did not improve from 0.12370
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0564 - val_loss: 0.1238 - val_mse: 0.1238 - val_mae: 0.3321 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 150/500
Epoch 00150: val_loss improved from 0.12370 to 0.12368, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0586 - val_loss: 0.1237 - val_mse: 0.1237 - val_mae: 0.3320 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 151/500
Epoch 00151: val_loss improved from 0.12368 to 0.12361, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0567 - val_loss: 0.1236 - val_mse: 0.1236 - val_mae: 0.3319 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 152/500
Epoch 00152: val_loss improved from 0.12361 to 0.12361, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0577 - val_loss: 0.1236 - val_mse: 0.1236 - val_mae: 0.3319 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 153/500
Epoch 00153: val_loss improved from 0.12361 to 0.12350, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0590 - val_loss: 0.1235 - val_mse: 0.1235 - val_mae: 0.3317 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 154/500
Epoch 00154: val_loss improved from 0.12350 to 0.12342, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0596 - val_loss: 0.1234 - val_mse: 0.1234 - val_mae: 0.3316 - lr: 1.0000e-05 - 65ms/epoch - 7ms/step
Epoch 155/500
Epoch 00155: val_loss did not improve from 0.12342
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0586 - val_loss: 0.1236 - val_mse: 0.1236 - val_mae: 0.3319 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 156/500
Epoch 00156: val_loss did not improve from 0.12342
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0581 - val_loss: 0.1236 - val_mse: 0.1236 - val_mae: 0.3318 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 157/500
Epoch 00157: val_loss improved from 0.12342 to 0.12331, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0583 - val_loss: 0.1233 - val_mse: 0.1233 - val_mae: 0.3315 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 158/500
Epoch 00158: val_loss improved from 0.12331 to 0.12310, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0615 - val_loss: 0.1231 - val_mse: 0.1231 - val_mae: 0.3312 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 159/500
Epoch 00159: val_loss did not improve from 0.12310
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0553 - val_loss: 0.1232 - val_mse: 0.1232 - val_mae: 0.3313 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 160/500
Epoch 00160: val_loss did not improve from 0.12310
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0573 - val_loss: 0.1233 - val_mse: 0.1233 - val_mae: 0.3314 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 161/500
Epoch 00161: val_loss did not improve from 0.12310
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0590 - val_loss: 0.1232 - val_mse: 0.1232 - val_mae: 0.3314 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 162/500
Epoch 00162: val_loss did not improve from 0.12310
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0597 - val_loss: 0.1233 - val_mse: 0.1233 - val_mae: 0.3315 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 163/500
Epoch 00163: val_loss did not improve from 0.12310
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0597 - val_loss: 0.1232 - val_mse: 0.1232 - val_mae: 0.3313 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 164/500
Epoch 00164: val_loss did not improve from 0.12310
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0594 - val_loss: 0.1232 - val_mse: 0.1232 - val_mae: 0.3313 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 165/500
Epoch 00165: val_loss improved from 0.12310 to 0.12306, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0556 - val_loss: 0.1231 - val_mse: 0.1231 - val_mae: 0.3311 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 166/500
Epoch 00166: val_loss improved from 0.12306 to 0.12301, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0595 - val_loss: 0.1230 - val_mse: 0.1230 - val_mae: 0.3310 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 167/500
Epoch 00167: val_loss improved from 0.12301 to 0.12298, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0589 - val_loss: 0.1230 - val_mse: 0.1230 - val_mae: 0.3310 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 168/500
Epoch 00168: val_loss did not improve from 0.12298
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0559 - val_loss: 0.1231 - val_mse: 0.1231 - val_mae: 0.3312 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 169/500
Epoch 00169: val_loss did not improve from 0.12298
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0576 - val_loss: 0.1231 - val_mse: 0.1231 - val_mae: 0.3311 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 170/500
Epoch 00170: val_loss improved from 0.12298 to 0.12291, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0554 - val_loss: 0.1229 - val_mse: 0.1229 - val_mae: 0.3309 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 171/500
Epoch 00171: val_loss did not improve from 0.12291
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0590 - val_loss: 0.1229 - val_mse: 0.1229 - val_mae: 0.3309 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 172/500
Epoch 00172: val_loss did not improve from 0.12291
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0569 - val_loss: 0.1231 - val_mse: 0.1231 - val_mae: 0.3311 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 173/500
Epoch 00173: val_loss did not improve from 0.12291
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0585 - val_loss: 0.1231 - val_mse: 0.1231 - val_mae: 0.3312 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 174/500
Epoch 00174: val_loss did not improve from 0.12291
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0586 - val_loss: 0.1232 - val_mse: 0.1232 - val_mae: 0.3313 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 175/500
Epoch 00175: val_loss did not improve from 0.12291
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0587 - val_loss: 0.1230 - val_mse: 0.1230 - val_mae: 0.3310 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 176/500
Epoch 00176: val_loss improved from 0.12291 to 0.12278, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0582 - val_loss: 0.1228 - val_mse: 0.1228 - val_mae: 0.3307 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 177/500
Epoch 00177: val_loss improved from 0.12278 to 0.12254, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0603 - val_loss: 0.1225 - val_mse: 0.1225 - val_mae: 0.3304 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 178/500
Epoch 00178: val_loss improved from 0.12254 to 0.12232, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0551 - val_loss: 0.1223 - val_mse: 0.1223 - val_mae: 0.3300 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 179/500
Epoch 00179: val_loss improved from 0.12232 to 0.12218, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0582 - val_loss: 0.1222 - val_mse: 0.1222 - val_mae: 0.3298 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 180/500
Epoch 00180: val_loss improved from 0.12218 to 0.12212, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0579 - val_loss: 0.1221 - val_mse: 0.1221 - val_mae: 0.3297 - lr: 1.0000e-05 - 75ms/epoch - 8ms/step
Epoch 181/500
Epoch 00181: val_loss improved from 0.12212 to 0.12207, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0561 - val_loss: 0.1221 - val_mse: 0.1221 - val_mae: 0.3297 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 182/500
Epoch 00182: val_loss improved from 0.12207 to 0.12206, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0586 - val_loss: 0.1221 - val_mse: 0.1221 - val_mae: 0.3297 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 183/500
Epoch 00183: val_loss did not improve from 0.12206
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0579 - val_loss: 0.1221 - val_mse: 0.1221 - val_mae: 0.3297 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 184/500
Epoch 00184: val_loss did not improve from 0.12206
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0591 - val_loss: 0.1223 - val_mse: 0.1223 - val_mae: 0.3301 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 185/500
Epoch 00185: val_loss did not improve from 0.12206
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0594 - val_loss: 0.1223 - val_mse: 0.1223 - val_mae: 0.3301 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 186/500
Epoch 00186: val_loss did not improve from 0.12206
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0588 - val_loss: 0.1222 - val_mse: 0.1222 - val_mae: 0.3299 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 187/500
Epoch 00187: val_loss did not improve from 0.12206
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0556 - val_loss: 0.1221 - val_mse: 0.1221 - val_mae: 0.3297 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 188/500
Epoch 00188: val_loss improved from 0.12206 to 0.12187, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0570 - val_loss: 0.1219 - val_mse: 0.1219 - val_mae: 0.3294 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 189/500
Epoch 00189: val_loss improved from 0.12187 to 0.12157, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0566 - val_loss: 0.1216 - val_mse: 0.1216 - val_mae: 0.3290 - lr: 1.0000e-05 - 65ms/epoch - 7ms/step
Epoch 190/500
Epoch 00190: val_loss improved from 0.12157 to 0.12142, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0594 - val_loss: 0.1214 - val_mse: 0.1214 - val_mae: 0.3287 - lr: 1.0000e-05 - 77ms/epoch - 8ms/step
Epoch 191/500
Epoch 00191: val_loss improved from 0.12142 to 0.12137, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0586 - val_loss: 0.1214 - val_mse: 0.1214 - val_mae: 0.3287 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 192/500
Epoch 00192: val_loss did not improve from 0.12137
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0571 - val_loss: 0.1214 - val_mse: 0.1214 - val_mae: 0.3287 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 193/500
Epoch 00193: val_loss did not improve from 0.12137
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0580 - val_loss: 0.1215 - val_mse: 0.1215 - val_mae: 0.3288 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 194/500
Epoch 00194: val_loss did not improve from 0.12137
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0552 - val_loss: 0.1216 - val_mse: 0.1216 - val_mae: 0.3290 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 195/500
Epoch 00195: val_loss did not improve from 0.12137
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0620 - val_loss: 0.1217 - val_mse: 0.1217 - val_mae: 0.3291 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 196/500
Epoch 00196: val_loss did not improve from 0.12137
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0578 - val_loss: 0.1216 - val_mse: 0.1216 - val_mae: 0.3290 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 197/500
Epoch 00197: val_loss did not improve from 0.12137
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0580 - val_loss: 0.1214 - val_mse: 0.1214 - val_mae: 0.3288 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 198/500
Epoch 00198: val_loss improved from 0.12137 to 0.12123, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0595 - val_loss: 0.1212 - val_mse: 0.1212 - val_mae: 0.3285 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 199/500
Epoch 00199: val_loss did not improve from 0.12123
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0576 - val_loss: 0.1213 - val_mse: 0.1213 - val_mae: 0.3286 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 200/500
Epoch 00200: val_loss improved from 0.12123 to 0.12120, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0570 - val_loss: 0.1212 - val_mse: 0.1212 - val_mae: 0.3284 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 201/500
Epoch 00201: val_loss improved from 0.12120 to 0.12105, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0568 - val_loss: 0.1210 - val_mse: 0.1210 - val_mae: 0.3282 - lr: 1.0000e-05 - 65ms/epoch - 6ms/step
Epoch 202/500
Epoch 00202: val_loss improved from 0.12105 to 0.12095, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0576 - val_loss: 0.1210 - val_mse: 0.1210 - val_mae: 0.3281 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 203/500
Epoch 00203: val_loss improved from 0.12095 to 0.12078, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0569 - val_loss: 0.1208 - val_mse: 0.1208 - val_mae: 0.3278 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 204/500
Epoch 00204: val_loss improved from 0.12078 to 0.12059, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0601 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3275 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 205/500
Epoch 00205: val_loss improved from 0.12059 to 0.12034, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0575 - val_loss: 0.1203 - val_mse: 0.1203 - val_mae: 0.3272 - lr: 1.0000e-05 - 65ms/epoch - 7ms/step
Epoch 206/500
Epoch 00206: val_loss improved from 0.12034 to 0.12019, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0580 - val_loss: 0.1202 - val_mse: 0.1202 - val_mae: 0.3270 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 207/500
Epoch 00207: val_loss improved from 0.12019 to 0.12009, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0559 - val_loss: 0.1201 - val_mse: 0.1201 - val_mae: 0.3268 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 208/500
Epoch 00208: val_loss did not improve from 0.12009
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0577 - val_loss: 0.1203 - val_mse: 0.1203 - val_mae: 0.3272 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 209/500
Epoch 00209: val_loss did not improve from 0.12009
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0576 - val_loss: 0.1205 - val_mse: 0.1205 - val_mae: 0.3274 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 210/500
Epoch 00210: val_loss did not improve from 0.12009
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0578 - val_loss: 0.1204 - val_mse: 0.1204 - val_mae: 0.3273 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 211/500
Epoch 00211: val_loss did not improve from 0.12009
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0570 - val_loss: 0.1202 - val_mse: 0.1202 - val_mae: 0.3270 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 212/500
Epoch 00212: val_loss did not improve from 0.12009
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0567 - val_loss: 0.1202 - val_mse: 0.1202 - val_mae: 0.3270 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 213/500
Epoch 00213: val_loss did not improve from 0.12009
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0573 - val_loss: 0.1202 - val_mse: 0.1202 - val_mae: 0.3269 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 214/500
Epoch 00214: val_loss improved from 0.12009 to 0.12006, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0576 - val_loss: 0.1201 - val_mse: 0.1201 - val_mae: 0.3268 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 215/500
Epoch 00215: val_loss improved from 0.12006 to 0.11989, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0562 - val_loss: 0.1199 - val_mse: 0.1199 - val_mae: 0.3265 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 216/500
Epoch 00216: val_loss improved from 0.11989 to 0.11976, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0584 - val_loss: 0.1198 - val_mse: 0.1198 - val_mae: 0.3264 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 217/500
Epoch 00217: val_loss improved from 0.11976 to 0.11955, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0584 - val_loss: 0.1196 - val_mse: 0.1196 - val_mae: 0.3260 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 218/500
Epoch 00218: val_loss improved from 0.11955 to 0.11942, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0572 - val_loss: 0.1194 - val_mse: 0.1194 - val_mae: 0.3259 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 219/500
Epoch 00219: val_loss improved from 0.11942 to 0.11933, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0555 - val_loss: 0.1193 - val_mse: 0.1193 - val_mae: 0.3257 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 220/500
Epoch 00220: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0551 - val_loss: 0.1193 - val_mse: 0.1193 - val_mae: 0.3258 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 221/500
Epoch 00221: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0567 - val_loss: 0.1194 - val_mse: 0.1194 - val_mae: 0.3259 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 222/500
Epoch 00222: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0583 - val_loss: 0.1196 - val_mse: 0.1196 - val_mae: 0.3262 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 223/500
Epoch 00223: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0577 - val_loss: 0.1198 - val_mse: 0.1198 - val_mae: 0.3265 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 224/500
Epoch 00224: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0576 - val_loss: 0.1200 - val_mse: 0.1200 - val_mae: 0.3267 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 225/500
Epoch 00225: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0574 - val_loss: 0.1200 - val_mse: 0.1200 - val_mae: 0.3267 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 226/500
Epoch 00226: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0572 - val_loss: 0.1198 - val_mse: 0.1198 - val_mae: 0.3265 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 227/500
Epoch 00227: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0573 - val_loss: 0.1201 - val_mse: 0.1201 - val_mae: 0.3269 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 228/500
Epoch 00228: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0574 - val_loss: 0.1205 - val_mse: 0.1205 - val_mae: 0.3275 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 229/500
Epoch 00229: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0595 - val_loss: 0.1205 - val_mse: 0.1205 - val_mae: 0.3275 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 230/500
Epoch 00230: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0576 - val_loss: 0.1204 - val_mse: 0.1204 - val_mae: 0.3273 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 231/500
Epoch 00231: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0582 - val_loss: 0.1204 - val_mse: 0.1204 - val_mae: 0.3273 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 232/500
Epoch 00232: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0603 - val_loss: 0.1203 - val_mse: 0.1203 - val_mae: 0.3272 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 233/500
Epoch 00233: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0589 - val_loss: 0.1205 - val_mse: 0.1205 - val_mae: 0.3274 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 234/500
Epoch 00234: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0594 - val_loss: 0.1207 - val_mse: 0.1207 - val_mae: 0.3278 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 235/500
Epoch 00235: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0581 - val_loss: 0.1207 - val_mse: 0.1207 - val_mae: 0.3277 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 236/500
Epoch 00236: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0546 - val_loss: 0.1207 - val_mse: 0.1207 - val_mae: 0.3277 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 237/500
Epoch 00237: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0585 - val_loss: 0.1203 - val_mse: 0.1203 - val_mae: 0.3271 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 238/500
Epoch 00238: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0575 - val_loss: 0.1200 - val_mse: 0.1200 - val_mae: 0.3268 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 239/500
Epoch 00239: val_loss did not improve from 0.11933
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0542 - val_loss: 0.1196 - val_mse: 0.1196 - val_mae: 0.3261 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 240/500
Epoch 00240: val_loss improved from 0.11933 to 0.11929, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0574 - val_loss: 0.1193 - val_mse: 0.1193 - val_mae: 0.3257 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 241/500
Epoch 00241: val_loss did not improve from 0.11929
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0589 - val_loss: 0.1193 - val_mse: 0.1193 - val_mae: 0.3257 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 242/500
Epoch 00242: val_loss did not improve from 0.11929
10/10 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0575 - val_loss: 0.1195 - val_mse: 0.1195 - val_mae: 0.3260 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 243/500
Epoch 00243: val_loss did not improve from 0.11929
10/10 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0594 - val_loss: 0.1196 - val_mse: 0.1196 - val_mae: 0.3261 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 244/500
Epoch 00244: val_loss did not improve from 0.11929
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0560 - val_loss: 0.1196 - val_mse: 0.1196 - val_mae: 0.3262 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 245/500
Epoch 00245: val_loss did not improve from 0.11929
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0560 - val_loss: 0.1193 - val_mse: 0.1193 - val_mae: 0.3258 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 246/500
Epoch 00246: val_loss improved from 0.11929 to 0.11925, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0554 - val_loss: 0.1192 - val_mse: 0.1192 - val_mae: 0.3256 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 247/500
Epoch 00247: val_loss did not improve from 0.11925
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0569 - val_loss: 0.1194 - val_mse: 0.1194 - val_mae: 0.3259 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 248/500
Epoch 00248: val_loss did not improve from 0.11925
10/10 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0589 - val_loss: 0.1195 - val_mse: 0.1195 - val_mae: 0.3260 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 249/500
Epoch 00249: val_loss did not improve from 0.11925
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0541 - val_loss: 0.1196 - val_mse: 0.1196 - val_mae: 0.3262 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 250/500
Epoch 00250: val_loss did not improve from 0.11925
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0580 - val_loss: 0.1198 - val_mse: 0.1198 - val_mae: 0.3265 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 251/500
Epoch 00251: val_loss did not improve from 0.11925
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0563 - val_loss: 0.1199 - val_mse: 0.1199 - val_mae: 0.3266 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 252/500
Epoch 00252: val_loss did not improve from 0.11925
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0569 - val_loss: 0.1200 - val_mse: 0.1200 - val_mae: 0.3267 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 253/500
Epoch 00253: val_loss did not improve from 0.11925
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0589 - val_loss: 0.1201 - val_mse: 0.1201 - val_mae: 0.3268 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 254/500
Epoch 00254: val_loss did not improve from 0.11925
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0576 - val_loss: 0.1203 - val_mse: 0.1203 - val_mae: 0.3271 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step

[... output truncated: epochs 255-413 omitted. Training continued at lr 1.0000e-05 with val_loss improving intermittently from 0.11925 to 0.11041; each improvement saved the model to LSTM7.h5 ...]

Epoch 414/500
Epoch 00414: val_loss improved from 0.11041 to 0.11008, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0547 - val_loss: 0.1101 - val_mse: 0.1101 - val_mae: 0.3123 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 415/500
Epoch 00415: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0535 - val_loss: 0.1104 - val_mse: 0.1104 - val_mae: 0.3128 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 416/500
Epoch 00416: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0520 - val_loss: 0.1107 - val_mse: 0.1107 - val_mae: 0.3132 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 417/500
Epoch 00417: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0541 - val_loss: 0.1111 - val_mse: 0.1111 - val_mae: 0.3138 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 418/500
Epoch 00418: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0532 - val_loss: 0.1110 - val_mse: 0.1110 - val_mae: 0.3137 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 419/500
Epoch 00419: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0538 - val_loss: 0.1109 - val_mse: 0.1109 - val_mae: 0.3135 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 420/500
Epoch 00420: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0553 - val_loss: 0.1110 - val_mse: 0.1110 - val_mae: 0.3138 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 421/500
Epoch 00421: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0532 - val_loss: 0.1113 - val_mse: 0.1113 - val_mae: 0.3142 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 422/500
Epoch 00422: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0539 - val_loss: 0.1119 - val_mse: 0.1119 - val_mae: 0.3150 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 423/500
Epoch 00423: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0512 - val_loss: 0.1124 - val_mse: 0.1124 - val_mae: 0.3158 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 424/500
Epoch 00424: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0539 - val_loss: 0.1126 - val_mse: 0.1126 - val_mae: 0.3160 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 425/500
Epoch 00425: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0541 - val_loss: 0.1127 - val_mse: 0.1127 - val_mae: 0.3163 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 426/500
Epoch 00426: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0537 - val_loss: 0.1126 - val_mse: 0.1126 - val_mae: 0.3160 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 427/500
Epoch 00427: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0557 - val_loss: 0.1124 - val_mse: 0.1124 - val_mae: 0.3159 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 428/500
Epoch 00428: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0535 - val_loss: 0.1129 - val_mse: 0.1129 - val_mae: 0.3166 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 429/500
Epoch 00429: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0554 - val_loss: 0.1125 - val_mse: 0.1125 - val_mae: 0.3160 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 430/500
Epoch 00430: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0546 - val_loss: 0.1117 - val_mse: 0.1117 - val_mae: 0.3148 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 431/500
Epoch 00431: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0561 - val_loss: 0.1111 - val_mse: 0.1111 - val_mae: 0.3139 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 432/500
Epoch 00432: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0539 - val_loss: 0.1109 - val_mse: 0.1109 - val_mae: 0.3136 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 433/500
Epoch 00433: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0538 - val_loss: 0.1105 - val_mse: 0.1105 - val_mae: 0.3129 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 434/500
Epoch 00434: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0563 - val_loss: 0.1105 - val_mse: 0.1105 - val_mae: 0.3130 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 435/500
Epoch 00435: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0530 - val_loss: 0.1112 - val_mse: 0.1112 - val_mae: 0.3140 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 436/500
Epoch 00436: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0569 - val_loss: 0.1114 - val_mse: 0.1114 - val_mae: 0.3144 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 437/500
Epoch 00437: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0552 - val_loss: 0.1112 - val_mse: 0.1112 - val_mae: 0.3141 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 438/500
Epoch 00438: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0526 - val_loss: 0.1113 - val_mse: 0.1113 - val_mae: 0.3142 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 439/500
Epoch 00439: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0537 - val_loss: 0.1118 - val_mse: 0.1118 - val_mae: 0.3150 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 440/500
Epoch 00440: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0545 - val_loss: 0.1120 - val_mse: 0.1120 - val_mae: 0.3153 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 441/500
Epoch 00441: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0539 - val_loss: 0.1118 - val_mse: 0.1118 - val_mae: 0.3150 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 442/500
Epoch 00442: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0526 - val_loss: 0.1113 - val_mse: 0.1113 - val_mae: 0.3143 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 443/500
Epoch 00443: val_loss did not improve from 0.11008
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0554 - val_loss: 0.1106 - val_mse: 0.1106 - val_mae: 0.3132 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 444/500
Epoch 00444: val_loss improved from 0.11008 to 0.10981, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0561 - val_loss: 0.1098 - val_mse: 0.1098 - val_mae: 0.3120 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 445/500
Epoch 00445: val_loss improved from 0.10981 to 0.10936, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0522 - val_loss: 0.1094 - val_mse: 0.1094 - val_mae: 0.3113 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 446/500
Epoch 00446: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0526 - val_loss: 0.1094 - val_mse: 0.1094 - val_mae: 0.3114 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 447/500
Epoch 00447: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0557 - val_loss: 0.1101 - val_mse: 0.1101 - val_mae: 0.3124 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 448/500
Epoch 00448: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0555 - val_loss: 0.1106 - val_mse: 0.1106 - val_mae: 0.3132 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 449/500
Epoch 00449: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0549 - val_loss: 0.1105 - val_mse: 0.1105 - val_mae: 0.3131 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 450/500
Epoch 00450: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0538 - val_loss: 0.1103 - val_mse: 0.1103 - val_mae: 0.3127 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 451/500
Epoch 00451: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0538 - val_loss: 0.1098 - val_mse: 0.1098 - val_mae: 0.3119 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 452/500
Epoch 00452: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0522 - val_loss: 0.1097 - val_mse: 0.1097 - val_mae: 0.3118 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 453/500
Epoch 00453: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0547 - val_loss: 0.1097 - val_mse: 0.1097 - val_mae: 0.3119 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 454/500
Epoch 00454: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0542 - val_loss: 0.1099 - val_mse: 0.1099 - val_mae: 0.3122 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 455/500
Epoch 00455: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0524 - val_loss: 0.1100 - val_mse: 0.1100 - val_mae: 0.3123 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 456/500
Epoch 00456: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0552 - val_loss: 0.1098 - val_mse: 0.1098 - val_mae: 0.3121 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 457/500
Epoch 00457: val_loss did not improve from 0.10936
10/10 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0521 - val_loss: 0.1095 - val_mse: 0.1095 - val_mae: 0.3116 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 458/500
Epoch 00458: val_loss improved from 0.10936 to 0.10890, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0547 - val_loss: 0.1089 - val_mse: 0.1089 - val_mae: 0.3107 - lr: 1.0000e-05 - 65ms/epoch - 7ms/step
Epoch 459/500
Epoch 00459: val_loss improved from 0.10890 to 0.10854, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0574 - val_loss: 0.1085 - val_mse: 0.1085 - val_mae: 0.3101 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 460/500
Epoch 00460: val_loss improved from 0.10854 to 0.10826, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0546 - val_loss: 0.1083 - val_mse: 0.1083 - val_mae: 0.3097 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 461/500
Epoch 00461: val_loss improved from 0.10826 to 0.10807, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0532 - val_loss: 0.1081 - val_mse: 0.1081 - val_mae: 0.3094 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 462/500
Epoch 00462: val_loss improved from 0.10807 to 0.10740, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0538 - val_loss: 0.1074 - val_mse: 0.1074 - val_mae: 0.3084 - lr: 1.0000e-05 - 65ms/epoch - 6ms/step
Epoch 463/500
Epoch 00463: val_loss improved from 0.10740 to 0.10729, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0550 - val_loss: 0.1073 - val_mse: 0.1073 - val_mae: 0.3082 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 464/500
Epoch 00464: val_loss did not improve from 0.10729
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0556 - val_loss: 0.1075 - val_mse: 0.1075 - val_mae: 0.3084 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 465/500
Epoch 00465: val_loss did not improve from 0.10729
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0561 - val_loss: 0.1074 - val_mse: 0.1074 - val_mae: 0.3084 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 466/500
Epoch 00466: val_loss did not improve from 0.10729
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0552 - val_loss: 0.1075 - val_mse: 0.1075 - val_mae: 0.3085 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 467/500
Epoch 00467: val_loss did not improve from 0.10729
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0545 - val_loss: 0.1076 - val_mse: 0.1076 - val_mae: 0.3087 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 468/500
Epoch 00468: val_loss did not improve from 0.10729
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0552 - val_loss: 0.1073 - val_mse: 0.1073 - val_mae: 0.3082 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 469/500
Epoch 00469: val_loss did not improve from 0.10729
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0533 - val_loss: 0.1073 - val_mse: 0.1073 - val_mae: 0.3082 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 470/500
Epoch 00470: val_loss improved from 0.10729 to 0.10671, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0541 - val_loss: 0.1067 - val_mse: 0.1067 - val_mae: 0.3073 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 471/500
Epoch 00471: val_loss improved from 0.10671 to 0.10646, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0544 - val_loss: 0.1065 - val_mse: 0.1065 - val_mae: 0.3069 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 472/500
Epoch 00472: val_loss improved from 0.10646 to 0.10628, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0562 - val_loss: 0.1063 - val_mse: 0.1063 - val_mae: 0.3066 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 473/500
Epoch 00473: val_loss improved from 0.10628 to 0.10609, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0557 - val_loss: 0.1061 - val_mse: 0.1061 - val_mae: 0.3064 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 474/500
Epoch 00474: val_loss did not improve from 0.10609
10/10 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0525 - val_loss: 0.1067 - val_mse: 0.1067 - val_mae: 0.3073 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 475/500
Epoch 00475: val_loss did not improve from 0.10609
10/10 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0523 - val_loss: 0.1068 - val_mse: 0.1068 - val_mae: 0.3075 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 476/500
Epoch 00476: val_loss did not improve from 0.10609
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0541 - val_loss: 0.1068 - val_mse: 0.1068 - val_mae: 0.3074 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 477/500
Epoch 00477: val_loss did not improve from 0.10609
10/10 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0523 - val_loss: 0.1067 - val_mse: 0.1067 - val_mae: 0.3073 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 478/500
Epoch 00478: val_loss did not improve from 0.10609
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0519 - val_loss: 0.1067 - val_mse: 0.1067 - val_mae: 0.3073 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 479/500
Epoch 00479: val_loss did not improve from 0.10609
10/10 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0514 - val_loss: 0.1066 - val_mse: 0.1066 - val_mae: 0.3071 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 480/500
Epoch 00480: val_loss did not improve from 0.10609
10/10 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0558 - val_loss: 0.1066 - val_mse: 0.1066 - val_mae: 0.3072 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 481/500
Epoch 00481: val_loss did not improve from 0.10609
10/10 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0504 - val_loss: 0.1067 - val_mse: 0.1067 - val_mae: 0.3073 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 482/500
Epoch 00482: val_loss improved from 0.10609 to 0.10605, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0547 - val_loss: 0.1061 - val_mse: 0.1061 - val_mae: 0.3063 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 483/500
Epoch 00483: val_loss improved from 0.10605 to 0.10552, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0536 - val_loss: 0.1055 - val_mse: 0.1055 - val_mae: 0.3055 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 484/500
Epoch 00484: val_loss improved from 0.10552 to 0.10517, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0514 - val_loss: 0.1052 - val_mse: 0.1052 - val_mae: 0.3050 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 485/500
Epoch 00485: val_loss did not improve from 0.10517
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0534 - val_loss: 0.1054 - val_mse: 0.1054 - val_mae: 0.3054 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 486/500
Epoch 00486: val_loss did not improve from 0.10517
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0551 - val_loss: 0.1056 - val_mse: 0.1056 - val_mae: 0.3057 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 487/500
Epoch 00487: val_loss did not improve from 0.10517
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0520 - val_loss: 0.1060 - val_mse: 0.1060 - val_mae: 0.3063 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 488/500
Epoch 00488: val_loss did not improve from 0.10517
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0540 - val_loss: 0.1058 - val_mse: 0.1058 - val_mae: 0.3060 - lr: 1.0000e-05 - 54ms/epoch - 5ms/step
Epoch 489/500
Epoch 00489: val_loss did not improve from 0.10517
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0537 - val_loss: 0.1053 - val_mse: 0.1053 - val_mae: 0.3052 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 490/500
Epoch 00490: val_loss did not improve from 0.10517
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0532 - val_loss: 0.1052 - val_mse: 0.1052 - val_mae: 0.3050 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 491/500
Epoch 00491: val_loss improved from 0.10517 to 0.10512, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0516 - val_loss: 0.1051 - val_mse: 0.1051 - val_mae: 0.3049 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 492/500
Epoch 00492: val_loss did not improve from 0.10512
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0545 - val_loss: 0.1052 - val_mse: 0.1052 - val_mae: 0.3049 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 493/500
Epoch 00493: val_loss improved from 0.10512 to 0.10483, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0524 - val_loss: 0.1048 - val_mse: 0.1048 - val_mae: 0.3044 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 494/500
Epoch 00494: val_loss improved from 0.10483 to 0.10409, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0535 - val_loss: 0.1041 - val_mse: 0.1041 - val_mae: 0.3033 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 495/500
Epoch 00495: val_loss improved from 0.10409 to 0.10342, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0530 - val_loss: 0.1034 - val_mse: 0.1034 - val_mae: 0.3022 - lr: 1.0000e-05 - 77ms/epoch - 8ms/step
Epoch 496/500
Epoch 00496: val_loss improved from 0.10342 to 0.10309, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0550 - val_loss: 0.1031 - val_mse: 0.1031 - val_mae: 0.3017 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 497/500
Epoch 00497: val_loss improved from 0.10309 to 0.10282, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0531 - val_loss: 0.1028 - val_mse: 0.1028 - val_mae: 0.3013 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 498/500
Epoch 00498: val_loss improved from 0.10282 to 0.10252, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0548 - val_loss: 0.1025 - val_mse: 0.1025 - val_mae: 0.3008 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 499/500
Epoch 00499: val_loss improved from 0.10252 to 0.10212, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0517 - val_loss: 0.1021 - val_mse: 0.1021 - val_mae: 0.3002 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 500/500
Epoch 00500: val_loss improved from 0.10212 to 0.10209, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0545 - val_loss: 0.1021 - val_mse: 0.1021 - val_mae: 0.3002 - lr: 1.0000e-05 - 90ms/epoch - 9ms/step
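The `saving model to LSTM7.h5` lines throughout this log come from a checkpoint callback that persists weights only when the monitored validation loss improves on its best value so far. A minimal pure-Python sketch of that decision logic (the actual model save is stubbed out; the function name and sample losses are illustrative, not from the notebook):

```python
# Sketch of checkpoint-on-improvement logic, as performed by a Keras
# ModelCheckpoint(monitor="val_loss", save_best_only=True) callback.
# Only the save/no-save decision is modeled; the save itself is stubbed.

def run_checkpointing(val_losses):
    """Return the 1-based epochs at which a 'val_loss improved' save occurs."""
    best = float("inf")
    saved_epochs = []
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:            # improvement -> save weights, update best
            best = loss
            saved_epochs.append(epoch)
        # otherwise the log reads "val_loss did not improve from {best}"
    return saved_epochs

# A tail like the one above: improvements land only at scattered epochs.
print(run_checkpointing([0.1115, 0.1105, 0.1107, 0.1104, 0.1101, 0.1103]))
# → [1, 2, 4, 5]
```

This is why the final reloaded model corresponds to the last "improved" epoch (here epoch 500, val_loss 0.10209), not simply the last epoch trained.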
SMA
Prediction vs Close: 50.0% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 23.38002191723926
RMSE: 4.835289227878645
MAPE: 3.8675720673818827
EMA
Prediction vs Close: 55.6% Accuracy
Prediction vs Prediction: 51.49% Accuracy
MSE: 35.056668726825066
RMSE: 5.920867227596399
MAPE: 4.704877912816018
WMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 44.87192646385527
RMSE: 6.698651092858566
MAPE: 5.33068935026581
DEMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 53.079656203261706
RMSE: 7.285578645739933
MAPE: 5.726487515550782
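The per-moving-average scores above (directional accuracy, MSE, RMSE, MAPE) can be reproduced from a prediction series and the actual close series. A hedged NumPy sketch with toy data — the helper name is mine, and the exact directional definitions the notebook uses for "Prediction vs Close" and "Prediction vs Prediction" are not shown, so this is one plausible reading:

```python
import numpy as np

def evaluate(pred, close):
    """Error and directional-accuracy metrics for a forecast vs. actual closes."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100.0
    # Directional accuracy: did the forecast move in the same direction as
    # the actual close, both measured from the previous actual close?
    pred_dir = np.sign(pred[1:] - close[:-1])
    real_dir = np.sign(close[1:] - close[:-1])
    acc = np.mean(pred_dir == real_dir) * 100.0
    return mse, rmse, mape, acc

mse, rmse, mape, acc = evaluate([101.0, 102.0, 101.5], [100.0, 103.0, 101.0])
print(round(mse, 4), round(rmse, 4), round(mape, 4), acc)
```

Note that a ~50% directional accuracy, as seen for SMA above, is no better than a coin flip even when MAPE looks respectable.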
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
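The TA-Lib help text above only gives the signature. For reference, Kaufman's formula itself is short: an efficiency ratio (net change over total absolute change in the window) is mapped to a smoothing constant between the fast and slow EMA constants, and the average adapts by that amount each step. A minimal NumPy sketch under that definition (defaults mirror the `timeperiod=30` shown above; this is not TA-Lib's implementation):

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average, straight from Kaufman's formula."""
    price = np.asarray(price, float)
    out = np.full(len(price), np.nan)
    if len(price) <= timeperiod:
        return out
    fast_sc, slow_sc = 2.0 / (fast + 1), 2.0 / (slow + 1)
    out[timeperiod] = price[timeperiod]          # seed with first usable price
    for t in range(timeperiod + 1, len(price)):
        change = abs(price[t] - price[t - timeperiod])
        volatility = np.sum(np.abs(np.diff(price[t - timeperiod:t + 1])))
        er = change / volatility if volatility else 0.0   # efficiency ratio
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2    # smoothing constant
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out
```

In a clean trend the efficiency ratio is 1 and KAMA tracks price almost as fast as a 2-period EMA; in choppy, mean-reverting stretches it slows toward the 30-period constant, which is why it is attractive as a lower-lag smoothing input here.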

Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17059.325, Time=4.32 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=4.28 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16133.019, Time=5.90 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=5.66 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16091.980, Time=7.54 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16009.844, Time=12.16 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-15757.180, Time=8.36 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17029.439, Time=4.44 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17000.917, Time=3.16 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=45.027, Time=4.20 sec
Best model: ARIMA(1,3,1)(0,0,0)[0]
Total fit time: 60.041 seconds
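`auto_arima` ranks each candidate order by AIC, and the criteria reported in the SARIMAX summary below follow directly from the log-likelihood. A quick sanity check, taking the figures from that summary and assuming k = 25 estimated parameters (22 exogenous coefficients plus ar.L1, ma.L1, and sigma2) and 805 effective observations (808 minus the d = 3 lost to differencing):

```python
import math

# Reproduce AIC/BIC/HQIC from the log-likelihood in the SARIMAX summary.
# k = 25 parameters (22 exogenous coefs + ar.L1 + ma.L1 + sigma2);
# n = 808 - 3 effective observations after third-order differencing.
loglik, k, n = 8554.662, 25, 808 - 3

aic = 2 * k - 2 * loglik
bic = k * math.log(n) - 2 * loglik
hqic = 2 * k * math.log(math.log(n)) - 2 * loglik

print(round(aic, 3), round(bic, 3), round(hqic, 3))
# → approximately -17059.325, -16942.054, -17014.288, matching the table
```

The enormous gap to the intercept model (AIC 45.027) is expected: after three rounds of differencing, an intercept implies a cubic drift in the original series, which fits these prices poorly.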
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 1) Log Likelihood 8554.662
Date: Sun, 12 Dec 2021 AIC -17059.325
Time: 19:21:45 BIC -16942.054
Sample: 0 HQIC -17014.288
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.409e-10 5.52e-21 -2.55e+10 0.000 -1.41e-10 -1.41e-10
x2 -1.378e-10 5.47e-21 -2.52e+10 0.000 -1.38e-10 -1.38e-10
x3 -1.323e-10 5.35e-21 -2.47e+10 0.000 -1.32e-10 -1.32e-10
x4 1.0000 5.41e-21 1.85e+20 0.000 1.000 1.000
x5 -1.221e-10 5.15e-21 -2.37e+10 0.000 -1.22e-10 -1.22e-10
x6 -8.465e-10 1.3e-20 -6.53e+10 0.000 -8.47e-10 -8.47e-10
x7 -1.3e-10 5.32e-21 -2.44e+10 0.000 -1.3e-10 -1.3e-10
x8 -1.267e-10 5.27e-21 -2.41e+10 0.000 -1.27e-10 -1.27e-10
x9 -2.032e-11 6.67e-22 -3.05e+10 0.000 -2.03e-11 -2.03e-11
x10 -5.319e-11 2.3e-21 -2.31e+10 0.000 -5.32e-11 -5.32e-11
x11 -1.275e-10 5.28e-21 -2.42e+10 0.000 -1.28e-10 -1.28e-10
x12 -1.262e-10 5.23e-21 -2.41e+10 0.000 -1.26e-10 -1.26e-10
x13 -1.339e-10 5.39e-21 -2.49e+10 0.000 -1.34e-10 -1.34e-10
x14 -1.092e-09 1.55e-20 -7.06e+10 0.000 -1.09e-09 -1.09e-09
x15 -1.342e-10 5.42e-21 -2.48e+10 0.000 -1.34e-10 -1.34e-10
x16 -2.01e-10 6.63e-21 -3.03e+10 0.000 -2.01e-10 -2.01e-10
x17 -1.144e-10 5.01e-21 -2.29e+10 0.000 -1.14e-10 -1.14e-10
x18 -9.245e-11 4.49e-21 -2.06e+10 0.000 -9.24e-11 -9.24e-11
x19 -1.646e-10 6.01e-21 -2.74e+10 0.000 -1.65e-10 -1.65e-10
x20 -2.482e-10 7.35e-21 -3.37e+10 0.000 -2.48e-10 -2.48e-10
x21 -3.385e-12 3.14e-24 -1.08e+12 0.000 -3.39e-12 -3.39e-12
x22 -8.066e-11 2.47e-23 -3.26e+12 0.000 -8.07e-11 -8.07e-11
ar.L1 -0.2877 2.48e-22 -1.16e+21 0.000 -0.288 -0.288
ma.L1 -0.9134 1.05e-21 -8.7e+20 0.000 -0.913 -0.913
sigma2 9.332e-11 6.96e-11 1.340 0.180 -4.32e-11 2.3e-10
===================================================================================
Ljung-Box (L1) (Q): 84.37 Jarque-Bera (JB): 4308764.36
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 5.22
Prob(H) (two-sided): 0.00 Kurtosis: 361.26
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.32e+42. Standard errors may be unstable.
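The diagnostics above flag severe non-normality in the residuals, and the Jarque-Bera statistic follows mechanically from the reported skew and kurtosis via JB = n/6 · (S² + (K − 3)²/4). A one-line check (n = 805 effective observations is my assumption, consistent with d = 3 differencing):

```python
# Jarque-Bera statistic from the reported residual skew and excess kurtosis:
#   JB = n/6 * (S^2 + (K - 3)^2 / 4)
n, skew, kurt = 805, 5.22, 361.26

jb = n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)
print(round(jb))  # ≈ 4.31 million, in line with the reported 4308764.36
```

A kurtosis of 361 (versus 3 for a Gaussian) means the residual distribution is dominated by a handful of extreme outliers — exactly the heavy-tailed, leptokurtic behavior the head note suggests mitigating with shorter MA periods.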
ARIMA order: (1, 3, 1)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.94953, saving model to LSTM7.h5
45/45 - 3s - loss: 0.3493 - mse: 0.3493 - mae: 0.4597 - val_loss: 0.9495 - val_mse: 0.9495 - val_mae: 0.9524 - lr: 0.0010 - 3s/epoch - 58ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.94953 to 0.62787, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0451 - mse: 0.0451 - mae: 0.1715 - val_loss: 0.6279 - val_mse: 0.6279 - val_mae: 0.7695 - lr: 0.0010 - 189ms/epoch - 4ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.62787 to 0.46000, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0266 - mse: 0.0266 - mae: 0.1299 - val_loss: 0.4600 - val_mse: 0.4600 - val_mae: 0.6537 - lr: 0.0010 - 185ms/epoch - 4ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.46000 to 0.37808, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0187 - mse: 0.0187 - mae: 0.1073 - val_loss: 0.3781 - val_mse: 0.3781 - val_mae: 0.5890 - lr: 0.0010 - 188ms/epoch - 4ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.37808 to 0.35518, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0156 - mse: 0.0156 - mae: 0.1004 - val_loss: 0.3552 - val_mse: 0.3552 - val_mae: 0.5700 - lr: 0.0010 - 183ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.35518 to 0.33869, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0142 - mse: 0.0142 - mae: 0.0935 - val_loss: 0.3387 - val_mse: 0.3387 - val_mae: 0.5562 - lr: 0.0010 - 181ms/epoch - 4ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.33869 to 0.32261, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0136 - mse: 0.0136 - mae: 0.0907 - val_loss: 0.3226 - val_mse: 0.3226 - val_mae: 0.5424 - lr: 0.0010 - 175ms/epoch - 4ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.32261
45/45 - 0s - loss: 0.0133 - mse: 0.0133 - mae: 0.0919 - val_loss: 0.3284 - val_mse: 0.3284 - val_mae: 0.5479 - lr: 0.0010 - 162ms/epoch - 4ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.32261 to 0.29843, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0848 - val_loss: 0.2984 - val_mse: 0.2984 - val_mae: 0.5212 - lr: 0.0010 - 181ms/epoch - 4ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.29843 to 0.27395, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0854 - val_loss: 0.2740 - val_mse: 0.2740 - val_mae: 0.4989 - lr: 0.0010 - 174ms/epoch - 4ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.27395 to 0.26507, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0796 - val_loss: 0.2651 - val_mse: 0.2651 - val_mae: 0.4904 - lr: 0.0010 - 190ms/epoch - 4ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.26507 to 0.26086, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0821 - val_loss: 0.2609 - val_mse: 0.2609 - val_mae: 0.4867 - lr: 0.0010 - 200ms/epoch - 4ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.26086
45/45 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0820 - val_loss: 0.2672 - val_mse: 0.2672 - val_mae: 0.4934 - lr: 0.0010 - 175ms/epoch - 4ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.26086 to 0.25533, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0824 - val_loss: 0.2553 - val_mse: 0.2553 - val_mae: 0.4820 - lr: 0.0010 - 187ms/epoch - 4ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.25533
45/45 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0831 - val_loss: 0.2588 - val_mse: 0.2588 - val_mae: 0.4857 - lr: 0.0010 - 167ms/epoch - 4ms/step
Epoch 16/500
Epoch 00016: val_loss improved from 0.25533 to 0.23311, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0848 - val_loss: 0.2331 - val_mse: 0.2331 - val_mae: 0.4599 - lr: 0.0010 - 185ms/epoch - 4ms/step
Epoch 17/500
Epoch 00017: val_loss improved from 0.23311 to 0.20135, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0806 - val_loss: 0.2014 - val_mse: 0.2014 - val_mae: 0.4262 - lr: 0.0010 - 201ms/epoch - 4ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.20135
45/45 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0786 - val_loss: 0.2293 - val_mse: 0.2293 - val_mae: 0.4568 - lr: 0.0010 - 170ms/epoch - 4ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.20135
45/45 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0843 - val_loss: 0.2187 - val_mse: 0.2187 - val_mae: 0.4459 - lr: 0.0010 - 172ms/epoch - 4ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.20135 to 0.19748, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0784 - val_loss: 0.1975 - val_mse: 0.1975 - val_mae: 0.4228 - lr: 0.0010 - 176ms/epoch - 4ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.19748
45/45 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0798 - val_loss: 0.2031 - val_mse: 0.2031 - val_mae: 0.4290 - lr: 0.0010 - 164ms/epoch - 4ms/step
Epoch 22/500
Epoch 00022: val_loss improved from 0.19748 to 0.18913, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0118 - mse: 0.0118 - mae: 0.0832 - val_loss: 0.1891 - val_mse: 0.1891 - val_mae: 0.4133 - lr: 0.0010 - 184ms/epoch - 4ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.18913
45/45 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0834 - val_loss: 0.1922 - val_mse: 0.1922 - val_mae: 0.4168 - lr: 0.0010 - 183ms/epoch - 4ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.18913
45/45 - 0s - loss: 0.0124 - mse: 0.0124 - mae: 0.0880 - val_loss: 0.1994 - val_mse: 0.1994 - val_mae: 0.4249 - lr: 0.0010 - 170ms/epoch - 4ms/step
Epoch 25/500
Epoch 00025: val_loss improved from 0.18913 to 0.17686, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0862 - val_loss: 0.1769 - val_mse: 0.1769 - val_mae: 0.3988 - lr: 0.0010 - 177ms/epoch - 4ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.17686
45/45 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0837 - val_loss: 0.1974 - val_mse: 0.1974 - val_mae: 0.4225 - lr: 0.0010 - 171ms/epoch - 4ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.17686
45/45 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0861 - val_loss: 0.1792 - val_mse: 0.1792 - val_mae: 0.4008 - lr: 0.0010 - 172ms/epoch - 4ms/step
Epoch 28/500
Epoch 00028: val_loss improved from 0.17686 to 0.15253, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0111 - mse: 0.0111 - mae: 0.0824 - val_loss: 0.1525 - val_mse: 0.1525 - val_mae: 0.3675 - lr: 0.0010 - 192ms/epoch - 4ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.15253
45/45 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0840 - val_loss: 0.1632 - val_mse: 0.1632 - val_mae: 0.3811 - lr: 0.0010 - 178ms/epoch - 4ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.15253
45/45 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0833 - val_loss: 0.1910 - val_mse: 0.1910 - val_mae: 0.4138 - lr: 0.0010 - 164ms/epoch - 4ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.15253
45/45 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0838 - val_loss: 0.1712 - val_mse: 0.1712 - val_mae: 0.3887 - lr: 0.0010 - 172ms/epoch - 4ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.15253
45/45 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0781 - val_loss: 0.1718 - val_mse: 0.1718 - val_mae: 0.3894 - lr: 0.0010 - 170ms/epoch - 4ms/step
Epoch 33/500
Epoch 00033: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00033: val_loss did not improve from 0.15253
45/45 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0757 - val_loss: 0.1762 - val_mse: 0.1762 - val_mae: 0.3939 - lr: 0.0010 - 181ms/epoch - 4ms/step
Epoch 34/500
Epoch 00034: val_loss improved from 0.15253 to 0.14316, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0154 - mse: 0.0154 - mae: 0.1010 - val_loss: 0.1432 - val_mse: 0.1432 - val_mae: 0.3543 - lr: 1.0000e-04 - 207ms/epoch - 5ms/step
Epoch 35/500
Epoch 00035: val_loss improved from 0.14316 to 0.13220, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0585 - val_loss: 0.1322 - val_mse: 0.1322 - val_mae: 0.3398 - lr: 1.0000e-04 - 180ms/epoch - 4ms/step
Epoch 36/500
Epoch 00036: val_loss improved from 0.13220 to 0.12947, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0584 - val_loss: 0.1295 - val_mse: 0.1295 - val_mae: 0.3359 - lr: 1.0000e-04 - 190ms/epoch - 4ms/step
Epoch 37/500
Epoch 00037: val_loss improved from 0.12947 to 0.12788, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0575 - val_loss: 0.1279 - val_mse: 0.1279 - val_mae: 0.3335 - lr: 1.0000e-04 - 189ms/epoch - 4ms/step
Epoch 38/500
Epoch 00038: val_loss improved from 0.12788 to 0.12680, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0578 - val_loss: 0.1268 - val_mse: 0.1268 - val_mae: 0.3318 - lr: 1.0000e-04 - 199ms/epoch - 4ms/step
Epoch 39/500
Epoch 00039: val_loss improved from 0.12680 to 0.12636, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0565 - val_loss: 0.1264 - val_mse: 0.1264 - val_mae: 0.3309 - lr: 1.0000e-04 - 207ms/epoch - 5ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0563 - val_loss: 0.1267 - val_mse: 0.1267 - val_mae: 0.3311 - lr: 1.0000e-04 - 196ms/epoch - 4ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0548 - val_loss: 0.1272 - val_mse: 0.1272 - val_mae: 0.3317 - lr: 1.0000e-04 - 181ms/epoch - 4ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0527 - val_loss: 0.1292 - val_mse: 0.1292 - val_mae: 0.3343 - lr: 1.0000e-04 - 199ms/epoch - 4ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0523 - val_loss: 0.1302 - val_mse: 0.1302 - val_mae: 0.3356 - lr: 1.0000e-04 - 189ms/epoch - 4ms/step
Epoch 44/500
Epoch 00044: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00044: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0536 - val_loss: 0.1289 - val_mse: 0.1289 - val_mae: 0.3334 - lr: 1.0000e-04 - 187ms/epoch - 4ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0518 - val_loss: 0.1286 - val_mse: 0.1286 - val_mae: 0.3330 - lr: 1.0000e-05 - 172ms/epoch - 4ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0526 - val_loss: 0.1282 - val_mse: 0.1282 - val_mae: 0.3324 - lr: 1.0000e-05 - 163ms/epoch - 4ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0519 - val_loss: 0.1281 - val_mse: 0.1281 - val_mae: 0.3324 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0547 - val_loss: 0.1280 - val_mse: 0.1280 - val_mae: 0.3322 - lr: 1.0000e-05 - 168ms/epoch - 4ms/step
Epoch 49/500
Epoch 00049: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00049: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0502 - val_loss: 0.1281 - val_mse: 0.1281 - val_mae: 0.3324 - lr: 1.0000e-05 - 188ms/epoch - 4ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0529 - val_loss: 0.1283 - val_mse: 0.1283 - val_mae: 0.3325 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0528 - val_loss: 0.1280 - val_mse: 0.1280 - val_mae: 0.3321 - lr: 1.0000e-05 - 169ms/epoch - 4ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0531 - val_loss: 0.1277 - val_mse: 0.1277 - val_mae: 0.3317 - lr: 1.0000e-05 - 168ms/epoch - 4ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0524 - val_loss: 0.1280 - val_mse: 0.1280 - val_mae: 0.3321 - lr: 1.0000e-05 - 165ms/epoch - 4ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0522 - val_loss: 0.1277 - val_mse: 0.1277 - val_mae: 0.3317 - lr: 1.0000e-05 - 180ms/epoch - 4ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0536 - val_loss: 0.1279 - val_mse: 0.1279 - val_mae: 0.3320 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0517 - val_loss: 0.1281 - val_mse: 0.1281 - val_mae: 0.3323 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0516 - val_loss: 0.1286 - val_mse: 0.1286 - val_mae: 0.3329 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0539 - val_loss: 0.1282 - val_mse: 0.1282 - val_mae: 0.3324 - lr: 1.0000e-05 - 172ms/epoch - 4ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0509 - val_loss: 0.1283 - val_mse: 0.1283 - val_mae: 0.3325 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0502 - val_loss: 0.1289 - val_mse: 0.1289 - val_mae: 0.3333 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0523 - val_loss: 0.1291 - val_mse: 0.1291 - val_mae: 0.3337 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0518 - val_loss: 0.1292 - val_mse: 0.1292 - val_mae: 0.3337 - lr: 1.0000e-05 - 163ms/epoch - 4ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0531 - val_loss: 0.1290 - val_mse: 0.1290 - val_mae: 0.3334 - lr: 1.0000e-05 - 163ms/epoch - 4ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0516 - val_loss: 0.1291 - val_mse: 0.1291 - val_mae: 0.3335 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0505 - val_loss: 0.1291 - val_mse: 0.1291 - val_mae: 0.3335 - lr: 1.0000e-05 - 172ms/epoch - 4ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0509 - val_loss: 0.1291 - val_mse: 0.1291 - val_mae: 0.3336 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0529 - val_loss: 0.1296 - val_mse: 0.1296 - val_mae: 0.3341 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0519 - val_loss: 0.1294 - val_mse: 0.1294 - val_mae: 0.3338 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0536 - val_loss: 0.1291 - val_mse: 0.1291 - val_mae: 0.3335 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 70/500
Epoch 00070: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0523 - val_loss: 0.1292 - val_mse: 0.1292 - val_mae: 0.3336 - lr: 1.0000e-05 - 177ms/epoch - 4ms/step
Epoch 71/500
Epoch 00071: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0501 - val_loss: 0.1291 - val_mse: 0.1291 - val_mae: 0.3335 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 72/500
Epoch 00072: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0505 - val_loss: 0.1294 - val_mse: 0.1294 - val_mae: 0.3338 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 73/500
Epoch 00073: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0507 - val_loss: 0.1293 - val_mse: 0.1293 - val_mae: 0.3337 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 74/500
Epoch 00074: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0530 - val_loss: 0.1295 - val_mse: 0.1295 - val_mae: 0.3339 - lr: 1.0000e-05 - 164ms/epoch - 4ms/step
Epoch 75/500
Epoch 00075: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0493 - val_loss: 0.1296 - val_mse: 0.1296 - val_mae: 0.3341 - lr: 1.0000e-05 - 162ms/epoch - 4ms/step
Epoch 76/500
Epoch 00076: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0529 - val_loss: 0.1294 - val_mse: 0.1294 - val_mae: 0.3339 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 77/500
Epoch 00077: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0521 - val_loss: 0.1292 - val_mse: 0.1292 - val_mae: 0.3335 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 78/500
Epoch 00078: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0504 - val_loss: 0.1284 - val_mse: 0.1284 - val_mae: 0.3324 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 79/500
Epoch 00079: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0501 - val_loss: 0.1282 - val_mse: 0.1282 - val_mae: 0.3320 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 80/500
Epoch 00080: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0514 - val_loss: 0.1281 - val_mse: 0.1281 - val_mae: 0.3318 - lr: 1.0000e-05 - 165ms/epoch - 4ms/step
Epoch 81/500
Epoch 00081: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0491 - val_loss: 0.1279 - val_mse: 0.1279 - val_mae: 0.3316 - lr: 1.0000e-05 - 169ms/epoch - 4ms/step
Epoch 82/500
Epoch 00082: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0511 - val_loss: 0.1279 - val_mse: 0.1279 - val_mae: 0.3315 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 83/500
Epoch 00083: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0487 - val_loss: 0.1282 - val_mse: 0.1282 - val_mae: 0.3319 - lr: 1.0000e-05 - 195ms/epoch - 4ms/step
Epoch 84/500
Epoch 00084: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0521 - val_loss: 0.1286 - val_mse: 0.1286 - val_mae: 0.3325 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 85/500
Epoch 00085: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0501 - val_loss: 0.1293 - val_mse: 0.1293 - val_mae: 0.3334 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 86/500
Epoch 00086: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0529 - val_loss: 0.1293 - val_mse: 0.1293 - val_mae: 0.3333 - lr: 1.0000e-05 - 168ms/epoch - 4ms/step
Epoch 87/500
Epoch 00087: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0519 - val_loss: 0.1295 - val_mse: 0.1295 - val_mae: 0.3337 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 88/500
Epoch 00088: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0531 - val_loss: 0.1290 - val_mse: 0.1290 - val_mae: 0.3329 - lr: 1.0000e-05 - 191ms/epoch - 4ms/step
Epoch 89/500
Epoch 00089: val_loss did not improve from 0.12636
45/45 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0518 - val_loss: 0.1289 - val_mse: 0.1289 - val_mae: 0.3328 - lr: 1.0000e-05 - 191ms/epoch - 4ms/step
Epoch 00089: early stopping
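The trace above shows a learning rate dropping from 1e-3 to 1e-4 to 1e-5 as `val_loss` plateaus, followed by early stopping. The callback configuration itself is not shown in this output, so the factor and patience below are assumptions; this is a minimal pure-Python re-implementation of the `ReduceLROnPlateau` schedule logic, for illustration only.

```python
# Minimal sketch of the ReduceLROnPlateau schedule visible in the log above.
# factor/patience/min_lr are assumed values, not the notebook's actual config.
def plateau_schedule(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Return the learning rate in effect at each epoch for a val_loss history."""
    best = float("inf")
    wait = 0
    lrs = []
    for v in val_losses:
        lrs.append(lr)          # rate used for this epoch
        if v < best:            # improvement resets the patience counter
            best = v
            wait = 0
        else:
            wait += 1
            if wait >= patience:  # plateau: cut the rate, floor at min_lr
                lr = max(lr * factor, min_lr)
                wait = 0
    return lrs
```

With patience exhausted twice, the schedule reproduces the 1e-3 → 1e-4 → 1e-5 staircase seen in the `lr:` column.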
SMA
Prediction vs Close: 50.0% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 23.38002191723926
RMSE: 4.835289227878645
MAPE: 3.8675720673818827
EMA
Prediction vs Close: 55.6% Accuracy
Prediction vs Prediction: 51.49% Accuracy
MSE: 35.056668726825066
RMSE: 5.920867227596399
MAPE: 4.704877912816018
WMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 44.87192646385527
RMSE: 6.698651092858566
MAPE: 5.33068935026581
DEMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 53.079656203261706
RMSE: 7.285578645739933
MAPE: 5.726487515550782
KAMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 30.678794294842323
RMSE: 5.5388441298561855
MAPE: 4.336649130448084
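The per-indicator summaries above report MSE, RMSE, MAPE, and a directional "Prediction vs Close" accuracy. The notebook's own metric code is not shown here; the helpers below are a hypothetical sketch of how such numbers could be computed.

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAPE (in percent), as reported in the summaries above."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(mse)
    mape = 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    return mse, rmse, mape

def directional_accuracy(y_true, y_pred):
    """Percentage of steps where the predicted move direction matches the
    actual move direction (a 'Prediction vs Close'-style accuracy)."""
    hits = sum(
        (p2 - p1 >= 0) == (t2 - t1 >= 0)
        for (t1, t2), (p1, p2) in zip(
            zip(y_true, y_true[1:]), zip(y_pred, y_pred[1:])
        )
    )
    return 100.0 * hits / (len(y_true) - 1)
```

Note that a model can score well on MSE/RMSE while hovering near 50% directional accuracy, which is the pattern several indicators above exhibit.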
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
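The TA-Lib help text above describes MIDPOINT: the mean of the highest and lowest price over the trailing window. The notebook calls TA-Lib directly; the function below is an illustrative pure-Python re-implementation, not the library's code.

```python
def midpoint(prices, timeperiod=14):
    """(highest + lowest) / 2 over each trailing window of `timeperiod` bars.
    The first timeperiod-1 outputs are None, mirroring TA-Lib's NaN padding."""
    out = [None] * (timeperiod - 1)
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append((max(window) + min(window)) / 2)
    return out
```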
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.733, Time=2.47 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.592, Time=4.30 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15587.551, Time=8.05 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.592, Time=5.82 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16365.334, Time=9.78 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16163.760, Time=13.81 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16245.181, Time=15.18 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17028.017, Time=5.16 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17106.133, Time=5.92 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17085.425, Time=6.90 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=-17000.553, Time=3.53 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 80.941 seconds
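The stepwise search above is pmdarima's `auto_arima` trying neighboring (p, d, q) orders and keeping the one with the smallest AIC. Reduced to its selection step, using the AIC values transcribed from the trace:

```python
# AIC values copied from the stepwise trace above; the winning order is
# simply the candidate with the smallest AIC.
aics = {
    (1, 3, 1): -17003.733,
    (0, 3, 0): -14572.592,
    (1, 3, 0): -15587.551,
    (0, 3, 1): -14570.592,
    (2, 3, 1): -16365.334,
    (1, 3, 2): -16163.760,
    (0, 3, 2): -16245.181,
    (2, 3, 0): -17028.017,
    (3, 3, 0): -17106.133,
    (3, 3, 1): -17085.425,
}
best_order = min(aics, key=aics.get)  # matches the reported "Best model"
```

This is only the comparison logic; the real `auto_arima` also decides which neighboring orders to try next and fits a SARIMAX model for each candidate.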
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood 8579.066
Date: Sun, 12 Dec 2021 AIC -17106.133
Time: 19:26:22 BIC -16984.171
Sample: 0 HQIC -17059.294
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -3.048e-10 1.69e-20 -1.8e+10 0.000 -3.05e-10 -3.05e-10
x2 -3.042e-10 1.75e-20 -1.74e+10 0.000 -3.04e-10 -3.04e-10
x3 -3.108e-10 1.62e-20 -1.92e+10 0.000 -3.11e-10 -3.11e-10
x4 1.0000 1.69e-20 5.91e+19 0.000 1.000 1.000
x5 -2.767e-10 1.61e-20 -1.72e+10 0.000 -2.77e-10 -2.77e-10
x6 -6.072e-09 1.38e-19 -4.42e+10 0.000 -6.07e-09 -6.07e-09
x7 -2.8e-10 1.62e-20 -1.73e+10 0.000 -2.8e-10 -2.8e-10
x8 -2.792e-10 1.65e-20 -1.69e+10 0.000 -2.79e-10 -2.79e-10
x9 -1.502e-10 1.02e-21 -1.48e+11 0.000 -1.5e-10 -1.5e-10
x10 -2.482e-10 4.3e-21 -5.77e+10 0.000 -2.48e-10 -2.48e-10
x11 -2.764e-10 1.64e-20 -1.69e+10 0.000 -2.76e-10 -2.76e-10
x12 -2.857e-10 1.64e-20 -1.74e+10 0.000 -2.86e-10 -2.86e-10
x13 -2.944e-10 1.66e-20 -1.77e+10 0.000 -2.94e-10 -2.94e-10
x14 -2.403e-09 4.86e-20 -4.95e+10 0.000 -2.4e-09 -2.4e-09
x15 -3.368e-10 1.81e-20 -1.86e+10 0.000 -3.37e-10 -3.37e-10
x16 -2.169e-10 1.45e-20 -1.49e+10 0.000 -2.17e-10 -2.17e-10
x17 -2.124e-10 1.44e-20 -1.47e+10 0.000 -2.12e-10 -2.12e-10
x18 -9.125e-10 2.98e-20 -3.06e+10 0.000 -9.13e-10 -9.13e-10
x19 -3.698e-10 1.9e-20 -1.95e+10 0.000 -3.7e-10 -3.7e-10
x20 -8.9e-10 2.94e-20 -3.03e+10 0.000 -8.9e-10 -8.9e-10
x21 -1.844e-11 1.86e-22 -9.9e+10 0.000 -1.84e-11 -1.84e-11
x22 -2.169e-10 5.04e-22 -4.3e+11 0.000 -2.17e-10 -2.17e-10
ar.L1 -1.2011 7.4e-23 -1.62e+22 0.000 -1.201 -1.201
ar.L2 -0.9017 1.51e-22 -5.98e+21 0.000 -0.902 -0.902
ar.L3 -0.4014 9.48e-23 -4.23e+21 0.000 -0.401 -0.401
sigma2 8.782e-11 6.95e-11 1.264 0.206 -4.84e-11 2.24e-10
===================================================================================
Ljung-Box (L1) (Q): 3.61 Jarque-Bera (JB): 16191.93
Prob(Q): 0.06 Prob(JB): 0.00
Heteroskedasticity (H): 0.35 Skew: 0.59
Prob(H) (two-sided): 0.00 Kurtosis: 24.94
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.23e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 0)
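The AIC in the summary above follows directly from the log likelihood via AIC = 2k − 2·logL. Counting the estimated parameters in the table (x1–x22, ar.L1–ar.L3, and sigma2) gives k = 26, which reproduces the reported value:

```python
# AIC = 2k - 2*logL, using the figures printed in the SARIMAX summary above.
log_likelihood = 8579.066
k = 26                      # 22 exogenous coefs + 3 AR terms + sigma2
aic = 2 * k - 2 * log_likelihood   # approximately -17106.132
```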
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.06775, saving model to LSTM7.h5
58/58 - 2s - loss: 0.0417 - mse: 0.0417 - mae: 0.1613 - val_loss: 0.0677 - val_mse: 0.0677 - val_mae: 0.2084 - lr: 0.0010 - 2s/epoch - 39ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.06775
58/58 - 0s - loss: 0.0227 - mse: 0.0227 - mae: 0.1232 - val_loss: 0.1669 - val_mse: 0.1669 - val_mae: 0.3564 - lr: 0.0010 - 222ms/epoch - 4ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.06775
58/58 - 0s - loss: 0.0190 - mse: 0.0190 - mae: 0.1036 - val_loss: 0.0788 - val_mse: 0.0788 - val_mae: 0.2255 - lr: 0.0010 - 213ms/epoch - 4ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.06775
58/58 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0705 - val_loss: 0.1947 - val_mse: 0.1947 - val_mae: 0.3957 - lr: 0.0010 - 217ms/epoch - 4ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.06775 to 0.03073, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0135 - mse: 0.0135 - mae: 0.0863 - val_loss: 0.0307 - val_mse: 0.0307 - val_mae: 0.1250 - lr: 0.0010 - 232ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.03073
58/58 - 0s - loss: 0.0125 - mse: 0.0125 - mae: 0.0789 - val_loss: 0.3547 - val_mse: 0.3547 - val_mae: 0.5564 - lr: 0.0010 - 209ms/epoch - 4ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.03073 to 0.02182, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0246 - mse: 0.0246 - mae: 0.1162 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1275 - lr: 0.0010 - 238ms/epoch - 4ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.02182
58/58 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0709 - val_loss: 0.3146 - val_mse: 0.3146 - val_mae: 0.5219 - lr: 0.0010 - 218ms/epoch - 4ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.02182 to 0.02122, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0206 - mse: 0.0206 - mae: 0.1064 - val_loss: 0.0212 - val_mse: 0.0212 - val_mae: 0.1029 - lr: 0.0010 - 221ms/epoch - 4ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0637 - val_loss: 0.2012 - val_mse: 0.2012 - val_mae: 0.4063 - lr: 0.0010 - 202ms/epoch - 3ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0137 - mse: 0.0137 - mae: 0.0854 - val_loss: 0.0673 - val_mse: 0.0673 - val_mae: 0.2084 - lr: 0.0010 - 209ms/epoch - 4ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0642 - val_loss: 0.1385 - val_mse: 0.1385 - val_mae: 0.3263 - lr: 0.0010 - 218ms/epoch - 4ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0784 - val_loss: 0.1140 - val_mse: 0.1140 - val_mae: 0.2914 - lr: 0.0010 - 215ms/epoch - 4ms/step
Epoch 14/500
Epoch 00014: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00014: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0709 - val_loss: 0.1579 - val_mse: 0.1579 - val_mae: 0.3520 - lr: 0.0010 - 211ms/epoch - 4ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0261 - mse: 0.0261 - mae: 0.1345 - val_loss: 0.0985 - val_mse: 0.0985 - val_mae: 0.2703 - lr: 1.0000e-04 - 209ms/epoch - 4ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0824 - val_loss: 0.0880 - val_mse: 0.0880 - val_mae: 0.2526 - lr: 1.0000e-04 - 208ms/epoch - 4ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0749 - val_loss: 0.0855 - val_mse: 0.0855 - val_mae: 0.2473 - lr: 1.0000e-04 - 220ms/epoch - 4ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0700 - val_loss: 0.0830 - val_mse: 0.0830 - val_mae: 0.2418 - lr: 1.0000e-04 - 209ms/epoch - 4ms/step
Epoch 19/500
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00019: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0678 - val_loss: 0.0853 - val_mse: 0.0853 - val_mae: 0.2453 - lr: 1.0000e-04 - 218ms/epoch - 4ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0631 - val_loss: 0.0850 - val_mse: 0.0850 - val_mae: 0.2448 - lr: 1.0000e-05 - 208ms/epoch - 4ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0602 - val_loss: 0.0849 - val_mse: 0.0849 - val_mae: 0.2445 - lr: 1.0000e-05 - 222ms/epoch - 4ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0636 - val_loss: 0.0847 - val_mse: 0.0847 - val_mae: 0.2443 - lr: 1.0000e-05 - 224ms/epoch - 4ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0615 - val_loss: 0.0844 - val_mse: 0.0844 - val_mae: 0.2436 - lr: 1.0000e-05 - 208ms/epoch - 4ms/step
Epoch 24/500
Epoch 00024: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00024: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0595 - val_loss: 0.0842 - val_mse: 0.0842 - val_mae: 0.2433 - lr: 1.0000e-05 - 217ms/epoch - 4ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0625 - val_loss: 0.0843 - val_mse: 0.0843 - val_mae: 0.2434 - lr: 1.0000e-05 - 210ms/epoch - 4ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0639 - val_loss: 0.0839 - val_mse: 0.0839 - val_mae: 0.2427 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0605 - val_loss: 0.0838 - val_mse: 0.0838 - val_mae: 0.2424 - lr: 1.0000e-05 - 209ms/epoch - 4ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0630 - val_loss: 0.0835 - val_mse: 0.0835 - val_mae: 0.2418 - lr: 1.0000e-05 - 208ms/epoch - 4ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0606 - val_loss: 0.0837 - val_mse: 0.0837 - val_mae: 0.2421 - lr: 1.0000e-05 - 215ms/epoch - 4ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0619 - val_loss: 0.0838 - val_mse: 0.0838 - val_mae: 0.2422 - lr: 1.0000e-05 - 220ms/epoch - 4ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0593 - val_loss: 0.0843 - val_mse: 0.0843 - val_mae: 0.2430 - lr: 1.0000e-05 - 217ms/epoch - 4ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0615 - val_loss: 0.0844 - val_mse: 0.0844 - val_mae: 0.2431 - lr: 1.0000e-05 - 213ms/epoch - 4ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0616 - val_loss: 0.0846 - val_mse: 0.0846 - val_mae: 0.2434 - lr: 1.0000e-05 - 207ms/epoch - 4ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0599 - val_loss: 0.0850 - val_mse: 0.0850 - val_mae: 0.2442 - lr: 1.0000e-05 - 207ms/epoch - 4ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0587 - val_loss: 0.0853 - val_mse: 0.0853 - val_mae: 0.2445 - lr: 1.0000e-05 - 205ms/epoch - 4ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0586 - val_loss: 0.0852 - val_mse: 0.0852 - val_mae: 0.2442 - lr: 1.0000e-05 - 219ms/epoch - 4ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0603 - val_loss: 0.0853 - val_mse: 0.0853 - val_mae: 0.2443 - lr: 1.0000e-05 - 213ms/epoch - 4ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0621 - val_loss: 0.0853 - val_mse: 0.0853 - val_mae: 0.2442 - lr: 1.0000e-05 - 208ms/epoch - 4ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0593 - val_loss: 0.0855 - val_mse: 0.0855 - val_mae: 0.2445 - lr: 1.0000e-05 - 211ms/epoch - 4ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0585 - val_loss: 0.0854 - val_mse: 0.0854 - val_mae: 0.2441 - lr: 1.0000e-05 - 219ms/epoch - 4ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0583 - val_loss: 0.0852 - val_mse: 0.0852 - val_mae: 0.2436 - lr: 1.0000e-05 - 211ms/epoch - 4ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0602 - val_loss: 0.0856 - val_mse: 0.0856 - val_mae: 0.2444 - lr: 1.0000e-05 - 209ms/epoch - 4ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0586 - val_loss: 0.0860 - val_mse: 0.0860 - val_mae: 0.2450 - lr: 1.0000e-05 - 217ms/epoch - 4ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0589 - val_loss: 0.0863 - val_mse: 0.0863 - val_mae: 0.2453 - lr: 1.0000e-05 - 212ms/epoch - 4ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0569 - val_loss: 0.0865 - val_mse: 0.0865 - val_mae: 0.2456 - lr: 1.0000e-05 - 209ms/epoch - 4ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0583 - val_loss: 0.0873 - val_mse: 0.0873 - val_mae: 0.2470 - lr: 1.0000e-05 - 206ms/epoch - 4ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0600 - val_loss: 0.0879 - val_mse: 0.0879 - val_mae: 0.2479 - lr: 1.0000e-05 - 211ms/epoch - 4ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0600 - val_loss: 0.0883 - val_mse: 0.0883 - val_mae: 0.2485 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0566 - val_loss: 0.0889 - val_mse: 0.0889 - val_mae: 0.2495 - lr: 1.0000e-05 - 213ms/epoch - 4ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0564 - val_loss: 0.0892 - val_mse: 0.0892 - val_mae: 0.2499 - lr: 1.0000e-05 - 218ms/epoch - 4ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0567 - val_loss: 0.0899 - val_mse: 0.0899 - val_mae: 0.2510 - lr: 1.0000e-05 - 212ms/epoch - 4ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0581 - val_loss: 0.0899 - val_mse: 0.0899 - val_mae: 0.2510 - lr: 1.0000e-05 - 213ms/epoch - 4ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0560 - val_loss: 0.0901 - val_mse: 0.0901 - val_mae: 0.2511 - lr: 1.0000e-05 - 218ms/epoch - 4ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0583 - val_loss: 0.0903 - val_mse: 0.0903 - val_mae: 0.2515 - lr: 1.0000e-05 - 215ms/epoch - 4ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0593 - val_loss: 0.0903 - val_mse: 0.0903 - val_mae: 0.2513 - lr: 1.0000e-05 - 211ms/epoch - 4ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0561 - val_loss: 0.0902 - val_mse: 0.0902 - val_mae: 0.2510 - lr: 1.0000e-05 - 207ms/epoch - 4ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0564 - val_loss: 0.0902 - val_mse: 0.0902 - val_mae: 0.2510 - lr: 1.0000e-05 - 208ms/epoch - 4ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0576 - val_loss: 0.0911 - val_mse: 0.0911 - val_mae: 0.2524 - lr: 1.0000e-05 - 210ms/epoch - 4ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.02122
58/58 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0577 - val_loss: 0.0920 - val_mse: 0.0920 - val_mae: 0.2538 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step
Epoch 00059: early stopping
MIDPOINT
Prediction vs Close: 47.39% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 19.38951232132957
RMSE: 4.4033523957695655
MAPE: 3.5042510250586574
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
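The T3 described in the help text above is Tillson's triple-smoothed moving average: a "generalized DEMA", GD(x) = (1 + v)·EMA(x) − v·EMA(EMA(x)), applied three times with volume factor v. The sketch below is an illustrative pure-Python version; its warm-up/seeding differs from TA-Lib's exact output, so it is not a drop-in replacement.

```python
def ema(xs, period):
    """Standard exponential moving average with alpha = 2 / (period + 1)."""
    alpha = 2.0 / (period + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def t3(xs, timeperiod=5, vfactor=0.7):
    """Tillson T3: GD(x) = (1+v)*EMA(x) - v*EMA(EMA(x)), applied three times.
    Illustrative only; seeding/lookback differ from TA-Lib's implementation."""
    def gd(series):
        e1 = ema(series, timeperiod)
        e2 = ema(e1, timeperiod)
        return [(1 + vfactor) * a - vfactor * b for a, b in zip(e1, e2)]
    return gd(gd(gd(xs)))
```

A quick sanity check: for a constant series, every EMA equals the constant, so GD and hence T3 return the constant unchanged.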
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16954.347, Time=2.59 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14725.736, Time=2.38 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16732.390, Time=8.17 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15913.358, Time=7.22 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16550.077, Time=10.03 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15004.835, Time=9.43 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16027.273, Time=10.03 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-16934.995, Time=2.56 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16924.758, Time=3.21 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=-16952.347, Time=2.17 sec
Best model: ARIMA(1,3,1)(0,0,0)[0]
Total fit time: 57.809 seconds
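The stepwise search above simply retains the candidate order with the lowest AIC. With the AIC values copied from the log (d fixed at 3 by the search), the selection reduces to:

```python
# AIC values reported by the stepwise search above
aic = {
    (1, 3, 1): -16954.347,
    (0, 3, 0): -14725.736,
    (1, 3, 0): -16732.390,
    (0, 3, 1): -15913.358,
    (2, 3, 1): -16550.077,
    (1, 3, 2): -15004.835,
    (0, 3, 2): -16027.273,
    (2, 3, 0): -16934.995,
    (2, 3, 2): -16924.758,
}

best_order = min(aic, key=aic.get)  # lower AIC is better
print(best_order)  # (1, 3, 1), matching "Best model" above
```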
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 1) Log Likelihood 8502.173
Date: Sun, 12 Dec 2021 AIC -16954.347
Time: 19:29:15 BIC -16837.076
Sample: 0 HQIC -16909.310
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 3.409e-14 2.62e-06 1.3e-08 1.000 -5.13e-06 5.13e-06
x2 1.816e-14 2.62e-06 6.93e-09 1.000 -5.13e-06 5.13e-06
x3 -2.039e-15 2.47e-06 -8.26e-10 1.000 -4.84e-06 4.84e-06
x4 1.0000 2.5e-06 4e+05 0.000 1.000 1.000
x5 2.488e-12 2.48e-06 1e-06 1.000 -4.86e-06 4.86e-06
x6 2.84e-15 6.48e-06 4.38e-10 1.000 -1.27e-05 1.27e-05
x7 3.618e-13 3.24e-06 1.12e-07 1.000 -6.36e-06 6.36e-06
x8 -0.0002 4.44e-06 -43.079 0.000 -0.000 -0.000
x9 2.93e-14 6.3e-08 4.65e-07 1.000 -1.23e-07 1.23e-07
x10 -2.843e-05 9.63e-06 -2.951 0.003 -4.73e-05 -9.55e-06
x11 0.0002 3.28e-06 53.981 0.000 0.000 0.000
x12 0.0001 5.63e-06 23.078 0.000 0.000 0.000
x13 -2.595e-14 2.63e-06 -9.88e-09 1.000 -5.15e-06 5.15e-06
x14 -6.497e-14 5.76e-06 -1.13e-08 1.000 -1.13e-05 1.13e-05
x15 1.699e-12 3.08e-06 5.51e-07 1.000 -6.04e-06 6.04e-06
x16 -3.969e-12 4.77e-06 -8.33e-07 1.000 -9.34e-06 9.34e-06
x17 5.452e-12 8.58e-07 6.35e-06 1.000 -1.68e-06 1.68e-06
x18 -3.68e-13 1.33e-05 -2.76e-08 1.000 -2.61e-05 2.61e-05
x19 -5.643e-13 4.61e-06 -1.22e-07 1.000 -9.03e-06 9.03e-06
x20 6.651e-14 4.9e-05 1.36e-09 1.000 -9.61e-05 9.61e-05
x21 -1.76e-16 8.47e-11 -2.08e-06 1.000 -1.66e-10 1.66e-10
x22 -7.82e-16 1.75e-10 -4.47e-06 1.000 -3.43e-10 3.43e-10
ar.L1 -0.2858 5.46e-08 -5.24e+06 0.000 -0.286 -0.286
ma.L1 -0.9143 5.59e-08 -1.63e+07 0.000 -0.914 -0.914
sigma2 1e-10 6.99e-11 1.430 0.153 -3.71e-11 2.37e-10
===================================================================================
Ljung-Box (L1) (Q): 84.00 Jarque-Bera (JB): 4822228.07
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -6.05
Prob(H) (two-sided): 0.00 Kurtosis: 381.97
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.54e+27. Standard errors may be unstable.
ARIMA order: (1, 3, 1)
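The information criteria and the Jarque-Bera statistic in the summary above follow directly from the reported log-likelihood, parameter count, and residual moments. A quick sanity check (treating the effective sample as 808 − 3 = 805 observations after d=3 differencing is an assumption about how statsmodels counts here, though it reproduces the printed values):

```python
import math

loglik = 8502.173   # Log Likelihood from the summary above
k = 25              # 22 exogenous terms + ar.L1 + ma.L1 + sigma2
n_eff = 808 - 3     # observations remaining after d=3 differencing (assumed)

aic = 2 * k - 2 * loglik             # ≈ -16954.346
bic = k * math.log(n_eff) - 2 * loglik  # ≈ -16837.08

# Jarque-Bera from the reported skew and (non-excess) kurtosis
skew, kurt = -6.05, 381.97
jb = n_eff / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)  # ≈ 4.82 million
```

The enormous JB statistic and kurtosis near 382 confirm what the singular-covariance warning suggests: the third-differenced residuals are wildly non-Gaussian, so the coefficient standard errors should not be taken at face value.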
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.36626, saving model to LSTM7.h5
43/43 - 2s - loss: 0.3061 - mse: 0.3061 - mae: 0.3860 - val_loss: 0.3663 - val_mse: 0.3663 - val_mae: 0.5814 - lr: 0.0010 - 2s/epoch - 54ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.36626 to 0.16105, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0369 - mse: 0.0369 - mae: 0.1521 - val_loss: 0.1611 - val_mse: 0.1611 - val_mae: 0.3788 - lr: 0.0010 - 170ms/epoch - 4ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.16105 to 0.09031, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0220 - mse: 0.0220 - mae: 0.1203 - val_loss: 0.0903 - val_mse: 0.0903 - val_mae: 0.2761 - lr: 0.0010 - 172ms/epoch - 4ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.09031 to 0.07747, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0176 - mse: 0.0176 - mae: 0.1051 - val_loss: 0.0775 - val_mse: 0.0775 - val_mae: 0.2534 - lr: 0.0010 - 195ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.07747
43/43 - 0s - loss: 0.0162 - mse: 0.0162 - mae: 0.1023 - val_loss: 0.0849 - val_mse: 0.0849 - val_mae: 0.2668 - lr: 0.0010 - 190ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.07747 to 0.07631, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0138 - mse: 0.0138 - mae: 0.0938 - val_loss: 0.0763 - val_mse: 0.0763 - val_mae: 0.2512 - lr: 0.0010 - 197ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.07631
43/43 - 0s - loss: 0.0129 - mse: 0.0129 - mae: 0.0902 - val_loss: 0.0773 - val_mse: 0.0773 - val_mae: 0.2534 - lr: 0.0010 - 168ms/epoch - 4ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.07631 to 0.07038, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0149 - mse: 0.0149 - mae: 0.0985 - val_loss: 0.0704 - val_mse: 0.0704 - val_mae: 0.2405 - lr: 0.0010 - 170ms/epoch - 4ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.07038
43/43 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0962 - val_loss: 0.0727 - val_mse: 0.0727 - val_mae: 0.2453 - lr: 0.0010 - 171ms/epoch - 4ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.07038
43/43 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0972 - val_loss: 0.0707 - val_mse: 0.0707 - val_mae: 0.2414 - lr: 0.0010 - 186ms/epoch - 4ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.07038 to 0.06790, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0140 - mse: 0.0140 - mae: 0.0952 - val_loss: 0.0679 - val_mse: 0.0679 - val_mae: 0.2355 - lr: 0.0010 - 192ms/epoch - 4ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.06790 to 0.06263, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0118 - mse: 0.0118 - mae: 0.0866 - val_loss: 0.0626 - val_mse: 0.0626 - val_mae: 0.2248 - lr: 0.0010 - 189ms/epoch - 4ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.06263 to 0.04913, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0125 - mse: 0.0125 - mae: 0.0907 - val_loss: 0.0491 - val_mse: 0.0491 - val_mae: 0.1955 - lr: 0.0010 - 190ms/epoch - 4ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.04913 to 0.04302, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0913 - val_loss: 0.0430 - val_mse: 0.0430 - val_mae: 0.1808 - lr: 0.0010 - 175ms/epoch - 4ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.04302
43/43 - 0s - loss: 0.0124 - mse: 0.0124 - mae: 0.0925 - val_loss: 0.0585 - val_mse: 0.0585 - val_mae: 0.2157 - lr: 0.0010 - 185ms/epoch - 4ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.04302
43/43 - 0s - loss: 0.0122 - mse: 0.0122 - mae: 0.0908 - val_loss: 0.0491 - val_mse: 0.0491 - val_mae: 0.1950 - lr: 0.0010 - 187ms/epoch - 4ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.04302
43/43 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0870 - val_loss: 0.0518 - val_mse: 0.0518 - val_mae: 0.2009 - lr: 0.0010 - 178ms/epoch - 4ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.04302
43/43 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0847 - val_loss: 0.0529 - val_mse: 0.0529 - val_mae: 0.2033 - lr: 0.0010 - 170ms/epoch - 4ms/step
Epoch 19/500
Epoch 00019: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00019: val_loss did not improve from 0.04302
43/43 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0795 - val_loss: 0.0513 - val_mse: 0.0513 - val_mae: 0.1997 - lr: 0.0010 - 160ms/epoch - 4ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.04302 to 0.04016, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0137 - mse: 0.0137 - mae: 0.0944 - val_loss: 0.0402 - val_mse: 0.0402 - val_mae: 0.1725 - lr: 1.0000e-04 - 194ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: val_loss improved from 0.04016 to 0.03663, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0637 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1631 - lr: 1.0000e-04 - 200ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: val_loss improved from 0.03663 to 0.03522, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0601 - val_loss: 0.0352 - val_mse: 0.0352 - val_mae: 0.1593 - lr: 1.0000e-04 - 195ms/epoch - 5ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0582 - val_loss: 0.0357 - val_mse: 0.0357 - val_mae: 0.1604 - lr: 1.0000e-04 - 163ms/epoch - 4ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0591 - val_loss: 0.0365 - val_mse: 0.0365 - val_mae: 0.1625 - lr: 1.0000e-04 - 160ms/epoch - 4ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0579 - val_loss: 0.0372 - val_mse: 0.0372 - val_mae: 0.1642 - lr: 1.0000e-04 - 167ms/epoch - 4ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0611 - val_loss: 0.0374 - val_mse: 0.0374 - val_mae: 0.1647 - lr: 1.0000e-04 - 183ms/epoch - 4ms/step
Epoch 27/500
Epoch 00027: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00027: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0562 - val_loss: 0.0364 - val_mse: 0.0364 - val_mae: 0.1620 - lr: 1.0000e-04 - 184ms/epoch - 4ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0579 - val_loss: 0.0364 - val_mse: 0.0364 - val_mae: 0.1620 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0569 - val_loss: 0.0364 - val_mse: 0.0364 - val_mae: 0.1620 - lr: 1.0000e-05 - 170ms/epoch - 4ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0544 - val_loss: 0.0364 - val_mse: 0.0364 - val_mae: 0.1619 - lr: 1.0000e-05 - 160ms/epoch - 4ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0581 - val_loss: 0.0364 - val_mse: 0.0364 - val_mae: 0.1620 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 32/500
Epoch 00032: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00032: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0558 - val_loss: 0.0363 - val_mse: 0.0363 - val_mae: 0.1619 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0577 - val_loss: 0.0363 - val_mse: 0.0363 - val_mae: 0.1618 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0572 - val_loss: 0.0364 - val_mse: 0.0364 - val_mae: 0.1620 - lr: 1.0000e-05 - 165ms/epoch - 4ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0550 - val_loss: 0.0365 - val_mse: 0.0365 - val_mae: 0.1622 - lr: 1.0000e-05 - 158ms/epoch - 4ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0573 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1626 - lr: 1.0000e-05 - 163ms/epoch - 4ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0557 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1632 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0553 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1630 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0564 - val_loss: 0.0367 - val_mse: 0.0367 - val_mae: 0.1629 - lr: 1.0000e-05 - 180ms/epoch - 4ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0556 - val_loss: 0.0367 - val_mse: 0.0367 - val_mae: 0.1629 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0526 - val_loss: 0.0367 - val_mse: 0.0367 - val_mae: 0.1628 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0570 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1627 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0558 - val_loss: 0.0367 - val_mse: 0.0367 - val_mae: 0.1629 - lr: 1.0000e-05 - 187ms/epoch - 4ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0545 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1631 - lr: 1.0000e-05 - 177ms/epoch - 4ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0558 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1630 - lr: 1.0000e-05 - 160ms/epoch - 4ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0574 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1630 - lr: 1.0000e-05 - 168ms/epoch - 4ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0557 - val_loss: 0.0369 - val_mse: 0.0369 - val_mae: 0.1633 - lr: 1.0000e-05 - 160ms/epoch - 4ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0553 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1630 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0563 - val_loss: 0.0370 - val_mse: 0.0370 - val_mae: 0.1634 - lr: 1.0000e-05 - 187ms/epoch - 4ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0576 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1637 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0577 - val_loss: 0.0370 - val_mse: 0.0370 - val_mae: 0.1637 - lr: 1.0000e-05 - 161ms/epoch - 4ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0512 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1638 - lr: 1.0000e-05 - 159ms/epoch - 4ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0574 - val_loss: 0.0370 - val_mse: 0.0370 - val_mae: 0.1635 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0541 - val_loss: 0.0370 - val_mse: 0.0370 - val_mae: 0.1634 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0555 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1629 - lr: 1.0000e-05 - 188ms/epoch - 4ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0549 - val_loss: 0.0365 - val_mse: 0.0365 - val_mae: 0.1623 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0553 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1624 - lr: 1.0000e-05 - 172ms/epoch - 4ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0554 - val_loss: 0.0367 - val_mse: 0.0367 - val_mae: 0.1626 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0577 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1624 - lr: 1.0000e-05 - 188ms/epoch - 4ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0550 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1624 - lr: 1.0000e-05 - 183ms/epoch - 4ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0555 - val_loss: 0.0367 - val_mse: 0.0367 - val_mae: 0.1627 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0580 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1625 - lr: 1.0000e-05 - 164ms/epoch - 4ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0583 - val_loss: 0.0365 - val_mse: 0.0365 - val_mae: 0.1620 - lr: 1.0000e-05 - 169ms/epoch - 4ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0564 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1623 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0558 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1630 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0576 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1629 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0564 - val_loss: 0.0367 - val_mse: 0.0367 - val_mae: 0.1625 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0561 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1628 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0551 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1638 - lr: 1.0000e-05 - 155ms/epoch - 4ms/step
Epoch 70/500
Epoch 00070: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0560 - val_loss: 0.0370 - val_mse: 0.0370 - val_mae: 0.1634 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 71/500
Epoch 00071: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0567 - val_loss: 0.0367 - val_mse: 0.0367 - val_mae: 0.1627 - lr: 1.0000e-05 - 183ms/epoch - 4ms/step
Epoch 72/500
Epoch 00072: val_loss did not improve from 0.03522
43/43 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0550 - val_loss: 0.0365 - val_mse: 0.0365 - val_mae: 0.1622 - lr: 1.0000e-05 - 180ms/epoch - 4ms/step
Epoch 00072: early stopping
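The `lr` column in the training log above steps 1e-3 → 1e-4 → 1e-5 and then stays at 1e-5 even when a further "reduction" is announced, which is the behavior of a ReduceLROnPlateau-style rule with factor 0.1 floored at `min_lr = 1e-5`. A minimal mimic of that bookkeeping (the patience of 5 is inferred from the epoch spacing in the log; this is not the Keras implementation):

```python
class PlateauLR:
    """Toy mimic of ReduceLROnPlateau bookkeeping: after `patience` epochs
    without a new best val_loss, multiply lr by `factor`, never below min_lr."""
    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best, self.wait = float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.wait = val_loss, 0
        else:
            self.wait += 1
            if self.wait > self.patience:
                # floor at min_lr: further "reductions" leave lr unchanged,
                # exactly as seen in the log above
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```

Early stopping at epoch 72 with the best val_loss frozen at 0.03522 from epoch 22 is the same idea with a much longer patience, restoring the checkpointed LSTM7.h5 weights.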
T3
T3
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 52.61% Accuracy
MSE: 90.72292612095576
RMSE: 9.524858325505727
MAPE: 7.398189805001564
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
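The TEMA documented above is the standard triple-EMA lag compensation, TEMA = 3·EMA₁ − 3·EMA₂ + EMA₃, where each EMA smooths the previous one. A pandas sketch (again, TA-Lib's SMA warm-up seeding means early values differ slightly):

```python
import pandas as pd

def tema(price, timeperiod=30):
    """Triple EMA: 3*EMA1 - 3*EMA2 + EMA3 cancels most of the lag
    of a plain EMA. Sketch; TA-Lib's EMA seeding differs at the start."""
    e1 = pd.Series(price, dtype=float).ewm(span=timeperiod, adjust=False).mean()
    e2 = e1.ewm(span=timeperiod, adjust=False).mean()
    e3 = e2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * e1 - 3 * e2 + e3
```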
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16412.930, Time=11.04 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14867.265, Time=6.57 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15902.803, Time=5.52 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15117.003, Time=8.07 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15669.652, Time=8.03 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-12676.374, Time=9.01 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16418.724, Time=9.01 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15107.772, Time=15.03 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15708.742, Time=16.14 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-13418.641, Time=25.03 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 113.483 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8234.362
Date: Sun, 12 Dec 2021 AIC -16418.724
Time: 19:34:17 BIC -16301.453
Sample: 0 HQIC -16373.687
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.784e-07 0.001 -0.000 1.000 -0.002 0.002
x2 -1.784e-07 0.001 -0.000 1.000 -0.003 0.003
x3 -1.794e-07 0.001 -0.000 1.000 -0.002 0.002
x4 1.0000 0.000 2616.546 0.000 0.999 1.001
x5 -1.704e-07 0.000 -0.000 1.000 -0.001 0.001
x6 -2.858e-07 3.31e-05 -0.009 0.993 -6.52e-05 6.46e-05
x7 -1.754e-07 0.001 -0.000 1.000 -0.002 0.002
x8 0.0007 0.000 3.091 0.002 0.000 0.001
x9 3.313e-08 0.000 9.39e-05 1.000 -0.001 0.001
x10 3.499e-06 0.000 0.022 0.983 -0.000 0.000
x11 -0.0003 0.000 -1.284 0.199 -0.001 0.000
x12 -6.362e-05 0.000 -0.260 0.795 -0.001 0.000
x13 -1.783e-07 0.000 -0.001 0.999 -0.000 0.000
x14 -5.244e-07 0.001 -0.001 0.999 -0.001 0.001
x15 -1.737e-07 0.000 -0.001 0.999 -0.000 0.000
x16 -2.583e-07 0.000 -0.001 0.999 -0.000 0.000
x17 -1.74e-07 0.000 -0.001 0.999 -0.000 0.000
x18 -5.776e-08 0.000 -0.000 1.000 -0.000 0.000
x19 -1.95e-07 0.000 -0.002 0.999 -0.000 0.000
x20 1.72e-07 0.000 0.001 0.999 -0.000 0.000
x21 -7.548e-10 0.001 -9.93e-07 1.000 -0.001 0.001
x22 -1.194e-08 0.000 -8.47e-05 1.000 -0.000 0.000
ma.L1 -1.3862 1.58e-05 -8.78e+04 0.000 -1.386 -1.386
ma.L2 0.4019 4.28e-05 9396.834 0.000 0.402 0.402
sigma2 1.265e-10 7.58e-11 1.669 0.095 -2.2e-11 2.75e-10
===================================================================================
Ljung-Box (L1) (Q): 66.79 Jarque-Bera (JB): 5900482.38
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -11.32
Prob(H) (two-sided): 0.00 Kurtosis: 421.81
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.07e+19. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.14359, saving model to LSTM7.h5
90/90 - 3s - loss: 0.1641 - mse: 0.1641 - mae: 0.2540 - val_loss: 0.1436 - val_mse: 0.1436 - val_mae: 0.3518 - lr: 0.0010 - 3s/epoch - 28ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.14359 to 0.01378, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0215 - mse: 0.0215 - mae: 0.1169 - val_loss: 0.0138 - val_mse: 0.0138 - val_mae: 0.0975 - lr: 0.0010 - 333ms/epoch - 4ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.01378 to 0.01057, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0822 - val_loss: 0.0106 - val_mse: 0.0106 - val_mae: 0.0842 - lr: 0.0010 - 379ms/epoch - 4ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.01057 to 0.00694, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0785 - val_loss: 0.0069 - val_mse: 0.0069 - val_mae: 0.0660 - lr: 0.0010 - 347ms/epoch - 4ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.00694 to 0.00690, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0115 - mse: 0.0115 - mae: 0.0829 - val_loss: 0.0069 - val_mse: 0.0069 - val_mae: 0.0639 - lr: 0.0010 - 325ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0115 - mse: 0.0115 - mae: 0.0830 - val_loss: 0.0186 - val_mse: 0.0186 - val_mae: 0.1160 - lr: 0.0010 - 359ms/epoch - 4ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0769 - val_loss: 0.0171 - val_mse: 0.0171 - val_mae: 0.1096 - lr: 0.0010 - 321ms/epoch - 4ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0792 - val_loss: 0.0508 - val_mse: 0.0508 - val_mae: 0.2100 - lr: 0.0010 - 334ms/epoch - 4ms/step
Epoch 9/500
Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00009: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0756 - val_loss: 0.0352 - val_mse: 0.0352 - val_mae: 0.1702 - lr: 0.0010 - 360ms/epoch - 4ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0123 - mse: 0.0123 - mae: 0.0882 - val_loss: 0.0213 - val_mse: 0.0213 - val_mae: 0.1270 - lr: 1.0000e-04 - 319ms/epoch - 4ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0627 - val_loss: 0.0222 - val_mse: 0.0222 - val_mae: 0.1296 - lr: 1.0000e-04 - 333ms/epoch - 4ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0587 - val_loss: 0.0243 - val_mse: 0.0243 - val_mae: 0.1362 - lr: 1.0000e-04 - 362ms/epoch - 4ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0579 - val_loss: 0.0298 - val_mse: 0.0298 - val_mae: 0.1534 - lr: 1.0000e-04 - 318ms/epoch - 4ms/step
Epoch 14/500
Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00014: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0561 - val_loss: 0.0333 - val_mse: 0.0333 - val_mae: 0.1636 - lr: 1.0000e-04 - 324ms/epoch - 4ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0568 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1623 - lr: 1.0000e-05 - 356ms/epoch - 4ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0569 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1613 - lr: 1.0000e-05 - 330ms/epoch - 4ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0543 - val_loss: 0.0322 - val_mse: 0.0322 - val_mae: 0.1607 - lr: 1.0000e-05 - 326ms/epoch - 4ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0564 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1605 - lr: 1.0000e-05 - 354ms/epoch - 4ms/step
Epoch 19/500
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00019: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0537 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1613 - lr: 1.0000e-05 - 324ms/epoch - 4ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0557 - val_loss: 0.0323 - val_mse: 0.0323 - val_mae: 0.1611 - lr: 1.0000e-05 - 344ms/epoch - 4ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0550 - val_loss: 0.0326 - val_mse: 0.0326 - val_mae: 0.1617 - lr: 1.0000e-05 - 353ms/epoch - 4ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0542 - val_loss: 0.0330 - val_mse: 0.0330 - val_mae: 0.1629 - lr: 1.0000e-05 - 314ms/epoch - 3ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0523 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1644 - lr: 1.0000e-05 - 340ms/epoch - 4ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0549 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1657 - lr: 1.0000e-05 - 372ms/epoch - 4ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0556 - val_loss: 0.0343 - val_mse: 0.0343 - val_mae: 0.1665 - lr: 1.0000e-05 - 338ms/epoch - 4ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0519 - val_loss: 0.0351 - val_mse: 0.0351 - val_mae: 0.1686 - lr: 1.0000e-05 - 355ms/epoch - 4ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0546 - val_loss: 0.0358 - val_mse: 0.0358 - val_mae: 0.1704 - lr: 1.0000e-05 - 345ms/epoch - 4ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0556 - val_loss: 0.0364 - val_mse: 0.0364 - val_mae: 0.1720 - lr: 1.0000e-05 - 311ms/epoch - 3ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.00690
90/90 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0548 - val_loss: 0.0373 - val_mse: 0.0373 - val_mae: 0.1744 - lr: 1.0000e-05 - 357ms/epoch - 4ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.00690
[Epochs 30–55 (lr = 1e-05): val_loss did not improve from 0.00690; training loss plateaued near 0.0044–0.0049 while val_loss rose from 0.0380 to 0.0576.]
Epoch 00055: early stopping
SMA
Prediction vs Close: 50.0% Accuracy
Prediction vs Prediction: 52.24% Accuracy
MSE: 23.38002191723926
RMSE: 4.835289227878645
MAPE: 3.8675720673818827

EMA
Prediction vs Close: 55.6% Accuracy
Prediction vs Prediction: 51.49% Accuracy
MSE: 35.056668726825066
RMSE: 5.920867227596399
MAPE: 4.704877912816018

WMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 44.87192646385527
RMSE: 6.698651092858566
MAPE: 5.33068935026581

DEMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 53.079656203261706
RMSE: 7.285578645739933
MAPE: 5.726487515550782

KAMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 30.678794294842323
RMSE: 5.5388441298561855
MAPE: 4.336649130448084

MIDPOINT
Prediction vs Close: 47.39% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 19.38951232132957
RMSE: 4.4033523957695655
MAPE: 3.5042510250586574

T3
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 52.61% Accuracy
MSE: 90.72292612095576
RMSE: 9.524858325505727
MAPE: 7.398189805001564

TEMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 48.88% Accuracy
MSE: 46.925505559638836
RMSE: 6.850219380402268
MAPE: 5.67920042842036

Runtime: mins: 46.20308623633333
from google.colab import files
import cv2
import matplotlib.pyplot as plt

uploaded = files.upload()
Saving Experiment7.png to Experiment7 (2).png
imgfile = 'Experiment7'
img = cv2.imread(imgfile + '.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; matplotlib expects RGB
plt.figure(figsize=(20, 10))
plt.axis('off')
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fcea03e3f10>
import json

with open('simulation7_data.json') as json_file:
    simulation7 = json.load(json_file)

fileimg = 'Experiment7'
for SIM in simulation7.keys():
    plot_train(simulation7, SIM)
    plot_test(simulation7, SIM)
LSTM train/test errors by MA type:

MA        Split  RMSE                 MSE                  MAE
SMA       Train  8.83785474825473     78.10767655124866    7.7078387567054225
SMA       Test   4.835289227878645    23.38002191723926    3.8675720673818827
EMA       Train  10.650301459792807   113.4289211844648    9.515306107312588
EMA       Test   5.920867227596399    35.056668726825066   4.704877912816018
WMA       Train  11.243993898135981   126.4273987813192    10.26125688382452
WMA       Test   6.698651092858566    44.87192646385527    5.33068935026581
DEMA      Train  12.733126101758637   162.1325003232871    11.568946368623488
DEMA      Test   7.285578645739933    53.079656203261706   5.726487515550782
KAMA      Train  10.74346361354885    115.42201041564812   9.724765787837049
KAMA      Test   5.5388441298561855   30.678794294842323   4.336649130448084
MIDPOINT  Train  9.390036890135665    88.17279279810867    8.334413877029046
MIDPOINT  Test   4.4033523957695655   19.38951232132957    3.5042510250586574
T3        Train  12.38034945262767    153.27305256917825   11.240396228217806
T3        Test   9.524858325505727    90.72292612095576    7.398189805001564
TEMA      Train  7.385672398559769    54.548156778847606   5.039816631869549
TEMA      Test   6.850219380402268    46.925505559638836   5.67920042842036
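The train/test RMSE, MSE and MAE figures above follow the standard definitions. A minimal NumPy sketch of those three metrics (the array values below are illustrative, not taken from the experiment):

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """Return (MSE, RMSE, MAE) for two equal-length sequences."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))    # mean squared error
    rmse = mse ** 0.5                 # root of the MSE, same units as the data
    mae = float(np.mean(np.abs(err))) # mean absolute error
    return mse, rmse, mae

mse, rmse, mae = error_metrics([100, 102, 101], [99, 103, 101])
```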
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    # (use the function's own parameters rather than the low_vol globals)
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    # Determine the model order via stepwise search
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')
    # Generate one-step-ahead predictions, walking forward through the test set
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])  # fold the observed value into the history
    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1, 1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1, 1))
    # Generate error data
    mse = mean_squared_error(yc_test, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc, predictionte.flatten().tolist(), mse, rmse, mae
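The ARIMA loop above is a walk-forward (expanding-window) evaluation: the model forecasts one step, then the observed test value is appended to the training history before the next refit. A self-contained sketch of that pattern with a stand-in persistence model (the names here are illustrative, not from the experiment):

```python
def walk_forward(history, test, fit_predict):
    """Walk-forward one-step forecasting: after each prediction,
    the observed test value is appended to the training history."""
    history = list(history)
    preds = []
    for observed in test:
        preds.append(fit_predict(history))  # forecast one step ahead
        history.append(observed)            # then reveal the true value
    return preds

# Stand-in model: persistence (predict the last observed value).
persistence = lambda h: h[-1]
print(walk_forward([1, 2, 3], [4, 5, 6], persistence))  # → [3, 4, 5]
```

In the notebook the `fit_predict` step is a full `pmdarima.ARIMA(order=order).fit(...).predict()`, which is why the loop is by far the slowest part of `get_arima_exog`.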
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Window the data: X has shape (samples, n_steps_in, features), so each sample
    # holds n_steps_in days' worth of data; yc holds the corresponding closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]     # n_steps_in (3)
    feature_size = X_train.shape[2]  # number of features (24)
    output_dim = y_train.shape[1]    # forecast horizon (1)
# Option 1
# Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
# model.add(Dense(units=64,activation='relu'))
# model.add(Dropout(0.5))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')
# ## Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 2
# model = Sequential()
# model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
# model.add(Dense(64))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 3
# define custom activation
#
# class Double_Tanh(Activation):
# def __init__(self, activation, **kwargs):
# super(Double_Tanh, self).__init__(activation, **kwargs)
# self.__name__ = 'double_tanh'
# def double_tanh(x):
# return (K.tanh(x) * 2)
# get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
# # Model Generation
# model = Sequential()
# #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
# model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
# model.add(Dense(1))
# model.add(Activation(double_tanh))
# model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
    # Option 4
    # Set up & fit the LSTM RNN
    model = Sequential()
    model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    model.add(LSTM(units=int(lstm_len / 2)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    # Common code
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM8.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file + '.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int(optimized_period[ma]),
                        verbose=2, callbacks=callbacks, validation_data=(X_test, y_test), shuffle=False)
    # Plot loss
    fname2 = img_file + '-' + ma
    plt.title(img_file + '-' + ma + ' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2 + '.png', dpi='figure')
    pyplot.show()
    # Generate train predictions and error data
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).flatten().tolist()
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    # Generate test predictions and error data, shifting by the fixed offset det
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).flatten().tolist()
    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
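Both `get_arima_exog` and `get_lstm` lean on min-max scaling to (-1, 1) and on inverting that transform for the final predictions: x' = 2·(x − min)/(max − min) − 1. A self-contained NumPy stand-in for scikit-learn's `MinMaxScaler` (illustrative only, not a drop-in replacement):

```python
import numpy as np

class MinMax:
    """Column-wise min-max scaler to the range (-1, 1)."""
    def fit_transform(self, X):
        X = np.asarray(X, dtype=float)
        self.lo, self.hi = X.min(axis=0), X.max(axis=0)
        return 2 * (X - self.lo) / (self.hi - self.lo) - 1

    def inverse_transform(self, Xs):
        # Undo the forward mapping: x = (x' + 1)/2 * (max - min) + min
        return (np.asarray(Xs, dtype=float) + 1) / 2 * (self.hi - self.lo) + self.lo

scaler = MinMax()
scaled = scaler.fit_transform([[0.0], [5.0], [10.0]])   # → [-1, 0, 1]
restored = scaler.inverse_transform(scaled)             # → [0, 5, 10]
```

One caveat visible in the code above: the scalers are fit on the full dataset before the train/test split, so the test range leaks into the scaling parameters; fitting on the training slice only would be the stricter protocol.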
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation8 = {}
    imgfile = 'Experiment8'
    for ma in optimized_period:
        print(ma)
        print(functions[ma])
        print(int(optimized_period[ma]))
        low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
        low_vol = low_vol.fillna(0)
        low_vol_data = df['close']
        high_vol = pd.DataFrame()
        df2 = df.copy()
        for i in df2.columns:
            if i in low_vol.columns:
                high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
        high_vol_data = df['close']
        # Generate ARIMA and LSTM predictions
        print('\nWorking on ' + ma + ' predictions')
        try:
            print('parameters used : ', train_len, test_len)
            low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = \
                get_arima_exog(low_vol, low_vol_data, train_len, test_len)
        except Exception:
            print('ARIMA error, skipping to next MA type')
            continue
        Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, \
            high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data,
                                                                 train_len, test_len, imgfile, ma)
        final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)  # ignoring first 3 steps
        mse_ftr = mean_squared_error(df['close'].head(train_len).values, final_prediction_tr.values)
        rmse_ftr = mse_ftr ** 0.5
        mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
        mse = mean_squared_error(df['close'].tail(test_len).values, final_prediction.values)
        rmse = mse ** 0.5
        mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        # Generate prediction accuracy
        actual = df['close'].tail(test_len).values
        result_1 = []
        result_2 = []
        for i in range(1, len(final_prediction)):
            # Compare prediction to previous close price
            if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                result_1.append(1)
            elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                result_1.append(1)
            else:
                result_1.append(0)
            # Compare prediction to previous prediction
            if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                result_2.append(1)
            elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                result_2.append(1)
            else:
                result_2.append(0)
        accuracy_1 = np.mean(result_1)
        accuracy_2 = np.mean(result_2)
        simulation8[ma] = {'low_vol': {'original': list(low_vol_Original), 'prediction': list(low_vol_prediction),
                                       'mse': low_vol_mse, 'rmse': low_vol_rmse, 'mae': low_vol_mae},
                           'high_vol': {'original': list(high_vol_Original), 'prediction': list(high_vol_prediction),
                                        'mse': high_vol_mse, 'rmse': high_vol_rmse, 'mae': high_vol_mae},
                           'final_tr': {'original': df['close'].head(train_len).tolist(),
                                        'prediction': final_prediction_tr.values.tolist(),
                                        'mse': mse_ftr, 'rmse': rmse_ftr, 'mae': mae_ftr},
                           'final': {'original': df['close'].tail(test_len).tolist(),
                                     'prediction': final_prediction.values.tolist(),
                                     'mse': mse, 'rmse': rmse, 'mae': mae},
                           'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
        # Save simulation data here as a checkpoint
        with open('simulation8_data.json', 'w') as fp:
            json.dump(simulation8, fp)
        # Use a separate loop variable so the outer `ma` is not shadowed
        for key in simulation8.keys():
            print('\n' + key)
            print('Prediction vs Close:\t\t' + str(round(100 * simulation8[key]['accuracy']['prediction vs close'], 2)) + '% Accuracy')
            print('Prediction vs Prediction:\t' + str(round(100 * simulation8[key]['accuracy']['prediction vs prediction'], 2)) + '% Accuracy')
            print('MSE:\t', simulation8[key]['final']['mse'],
                  '\nRMSE:\t', simulation8[key]['final']['rmse'],
                  '\nMAPE:\t', simulation8[key]['final']['mae'])  # note: this prints MAE under the 'MAPE' label
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:', elapsed / 60)
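The two accuracy figures count directional hits: a step scores when the prediction moves the same way as the close, judged either against the previous close or against the previous prediction. A standalone sketch of that scoring (the arrays below are illustrative):

```python
def directional_accuracy(pred, actual):
    """Return (prediction-vs-close, prediction-vs-prediction) hit rates.
    A step counts as a hit when the direction implied by the prediction
    matches the actual direction of the close."""
    vs_close, vs_pred = [], []
    for i in range(1, len(pred)):
        up = actual[i] > actual[i - 1]
        down = actual[i] < actual[i - 1]
        # Direction of prediction relative to the previous close
        vs_close.append(1 if (pred[i] > actual[i - 1] and up) or (pred[i] < actual[i - 1] and down) else 0)
        # Direction of prediction relative to the previous prediction
        vs_pred.append(1 if (pred[i] > pred[i - 1] and up) or (pred[i] < pred[i - 1] and down) else 0)
    return sum(vs_close) / len(vs_close), sum(vs_pred) / len(vs_pred)

acc1, acc2 = directional_accuracy([10, 12, 11, 13], [10, 11, 12, 13])
```

Both measures hover near 50% in the results above, i.e. close to a coin flip on direction even when the magnitude errors (RMSE) look modest.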
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-14771.778, Time=11.98 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14135.387, Time=5.95 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15280.870, Time=10.09 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15393.475, Time=8.27 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-14981.217, Time=4.96 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14516.868, Time=13.65 sec
ARIMA(0,3,1)(0,0,0)[0] intercept : AIC=-15663.967, Time=10.14 sec
ARIMA(0,3,0)(0,0,0)[0] intercept : AIC=-13838.679, Time=5.14 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=-14734.479, Time=6.22 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-14866.409, Time=8.20 sec
ARIMA(1,3,0)(0,0,0)[0] intercept : AIC=-16157.403, Time=13.60 sec
ARIMA(2,3,0)(0,0,0)[0] intercept : AIC=-14855.623, Time=10.63 sec
ARIMA(2,3,1)(0,0,0)[0] intercept : AIC=-14720.644, Time=11.20 sec
Best model: ARIMA(1,3,0)(0,0,0)[0] intercept
Total fit time: 120.058 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 0) Log Likelihood 8103.701
Date: Sun, 12 Dec 2021 AIC -16157.403
Time: 19:41:55 BIC -16040.132
Sample: 0 HQIC -16112.366
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
intercept -2.802e-06 7.54e-07 -3.714 0.000 -4.28e-06 -1.32e-06
x1 -2.598e-05 0.001 -0.041 0.967 -0.001 0.001
x2 -2.599e-05 0.001 -0.047 0.963 -0.001 0.001
x3 -2.615e-05 0.001 -0.038 0.970 -0.001 0.001
x4 1.0000 0.001 1507.083 0.000 0.999 1.001
x5 -2.485e-05 0.001 -0.038 0.970 -0.001 0.001
x6 -2.807e-05 3.32e-05 -0.845 0.398 -9.32e-05 3.71e-05
x7 -2.593e-05 8.29e-05 -0.313 0.755 -0.000 0.000
x8 0.0019 7.15e-05 26.753 0.000 0.002 0.002
x9 -1.867e-06 0.001 -0.003 0.998 -0.001 0.001
x10 0.0003 0.000 0.644 0.520 -0.001 0.001
x11 -0.0025 8.93e-05 -28.145 0.000 -0.003 -0.002
x12 0.0015 8.06e-05 18.290 0.000 0.001 0.002
x13 -2.61e-05 0.000 -0.076 0.939 -0.001 0.001
x14 -7.719e-05 0.000 -0.374 0.708 -0.000 0.000
x15 -2.829e-05 8.57e-05 -0.330 0.741 -0.000 0.000
x16 -2.424e-05 0.000 -0.142 0.887 -0.000 0.000
x17 -2.292e-05 9.81e-05 -0.234 0.815 -0.000 0.000
x18 -4.39e-05 0.000 -0.429 0.668 -0.000 0.000
x19 -3.005e-05 0.000 -0.293 0.770 -0.000 0.000
x20 4.559e-05 9.36e-05 0.487 0.626 -0.000 0.000
x21 -7.981e-10 0.001 -9.88e-07 1.000 -0.002 0.002
x22 -1.557e-08 0.000 -0.000 1.000 -0.000 0.000
ar.L1 -0.6667 6.95e-05 -9587.073 0.000 -0.667 -0.667
sigma2 1.314e-10 7.8e-11 1.686 0.092 -2.14e-11 2.84e-10
===================================================================================
Ljung-Box (L1) (Q): 90.59 Jarque-Bera (JB): 3138023.60
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.03 Skew: 5.01
Prob(H) (two-sided): 0.00 Kurtosis: 308.71
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.36e+19. Standard errors may be unstable.
ARIMA order: (1, 3, 0)
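auto_arima ranks candidate orders by AIC = 2k − 2·ln(L̂). As a sanity check on the summary above: with Log Likelihood 8103.701 and 25 estimated parameters (intercept, 22 exogenous coefficients, ar.L1, sigma2 — the count is inferred from the coefficient table, not stated in the output), the formula reproduces the reported AIC of −16157.403 up to rounding:

```python
def aic(k, log_likelihood):
    """Akaike information criterion: 2k - 2*ln(L-hat)."""
    return 2 * k - 2 * log_likelihood

print(aic(25, 8103.701))  # → -16157.402
```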
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05343, saving model to LSTM8.h5
48/48 - 3s - loss: 1.4281 - val_loss: 0.0534 - lr: 0.0010 - 3s/epoch - 68ms/step
[Epochs 2–51: val_loss did not improve from 0.05343; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11; training loss fell from 1.3562 to 0.9791 while val_loss crept up to 0.0699.]
Epoch 00051: early stopping
SMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 24.12057947793772
RMSE: 4.911270658183859
MAPE: 3.8711068958774497
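The metric blocks above report MSE, RMSE, MAPE, and two directional-accuracy figures for each moving-average variant. A minimal NumPy sketch of how such metrics are typically computed, assuming `pred` and `close` are aligned 1-D price arrays; the notebook's exact definition of its two accuracy percentages is not shown, so the directional measure below (agreement between consecutive moves) is an assumption:

```python
import numpy as np

def report_metrics(pred, close):
    """Error and directional-accuracy metrics in the style reported above.
    `pred` and `close` are aligned 1-D arrays of predicted and actual prices."""
    err = pred - close
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / close)) * 100
    # Assumed definition of "Prediction vs Close" accuracy: does the
    # prediction move in the same direction as the actual close did?
    same_dir = np.sign(np.diff(pred)) == np.sign(np.diff(close))
    direction_acc = same_dir.mean() * 100
    return mse, rmse, mape, direction_acc
```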
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
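The help text above is TA-Lib's `EMA`. Its recursion uses a smoothing factor of `2 / (timeperiod + 1)`, which pandas can reproduce with `ewm`. A sketch on a synthetic series; note that TA-Lib seeds the EMA with an SMA of the first `timeperiod` values (returning NaN before that), so the earliest values differ slightly from pandas' recursion:

```python
import numpy as np
import pandas as pd

close = pd.Series(np.linspace(100.0, 110.0, 60))  # synthetic rising price series

period = 30
# EMA recursion: ema_t = alpha * price_t + (1 - alpha) * ema_{t-1},
# with alpha = 2 / (timeperiod + 1), matching TA-Lib's smoothing factor.
ema = close.ewm(span=period, adjust=False).mean()
```

On a rising series the EMA lags below the price, which is why longer `timeperiod` values smooth more aggressively.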
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.831, Time=2.39 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=4.29 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16288.946, Time=7.13 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=6.21 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16226.419, Time=11.44 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-13742.844, Time=8.55 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16101.256, Time=19.59 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17006.489, Time=2.86 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17002.686, Time=2.96 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17086.654, Time=6.14 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=-16097.512, Time=15.65 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17002.132, Time=3.66 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-17004.011, Time=3.85 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 94.752 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8570.327
Date: Sun, 12 Dec 2021 AIC -17086.654
Time: 19:44:27 BIC -16960.001
Sample: 0 HQIC -17038.014
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.333e-10 9.31e-21 -2.51e+10 0.000 -2.33e-10 -2.33e-10
x2 -2.326e-10 9.29e-21 -2.5e+10 0.000 -2.33e-10 -2.33e-10
x3 -2.342e-10 9.32e-21 -2.51e+10 0.000 -2.34e-10 -2.34e-10
x4 1.0000 9.31e-21 1.07e+20 0.000 1.000 1.000
x5 -2.121e-10 8.87e-21 -2.39e+10 0.000 -2.12e-10 -2.12e-10
x6 -8.055e-10 1.64e-20 -4.9e+10 0.000 -8.05e-10 -8.05e-10
x7 -2.312e-10 9.27e-21 -2.49e+10 0.000 -2.31e-10 -2.31e-10
x8 -2.26e-10 9.17e-21 -2.47e+10 0.000 -2.26e-10 -2.26e-10
x9 -1.174e-11 1.86e-21 -6.3e+09 0.000 -1.17e-11 -1.17e-11
x10 -4.486e-11 3.98e-21 -1.13e+10 0.000 -4.49e-11 -4.49e-11
x11 -2.235e-10 9.11e-21 -2.45e+10 0.000 -2.23e-10 -2.23e-10
x12 -2.28e-10 9.21e-21 -2.48e+10 0.000 -2.28e-10 -2.28e-10
x13 -2.332e-10 9.31e-21 -2.51e+10 0.000 -2.33e-10 -2.33e-10
x14 -1.78e-09 2.57e-20 -6.92e+10 0.000 -1.78e-09 -1.78e-09
x15 -2.118e-10 8.84e-21 -2.4e+10 0.000 -2.12e-10 -2.12e-10
x16 -5.28e-10 1.4e-20 -3.76e+10 0.000 -5.28e-10 -5.28e-10
x17 -2.173e-10 8.94e-21 -2.43e+10 0.000 -2.17e-10 -2.17e-10
x18 -3.83e-11 3.74e-21 -1.02e+10 0.000 -3.83e-11 -3.83e-11
x19 -2.606e-10 9.86e-21 -2.64e+10 0.000 -2.61e-10 -2.61e-10
x20 -2.433e-10 9.48e-21 -2.57e+10 0.000 -2.43e-10 -2.43e-10
x21 -3.774e-13 1.42e-24 -2.65e+11 0.000 -3.77e-13 -3.77e-13
x22 -1.096e-11 1.35e-24 -8.11e+12 0.000 -1.1e-11 -1.1e-11
ar.L1 -0.4919 1.5e-22 -3.27e+21 0.000 -0.492 -0.492
ar.L2 -0.1922 8.41e-23 -2.28e+21 0.000 -0.192 -0.192
ar.L3 -0.0462 4.01e-23 -1.15e+21 0.000 -0.046 -0.046
ma.L1 -0.7070 3.34e-22 -2.12e+21 0.000 -0.707 -0.707
sigma2 8.977e-11 6.95e-11 1.291 0.197 -4.65e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 54.80 Jarque-Bera (JB): 4212163.49
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.43
Prob(H) (two-sided): 0.00 Kurtosis: 357.21
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.65e+43. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
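The stepwise search above is pmdarima's `auto_arima` choosing the (p, d, q) order that minimizes AIC. As a self-contained illustration of AIC-based order selection, here is a sketch using plain OLS autoregressive fits on a simulated series (not pmdarima and not the notebook's code; the AR(2) process is synthetic so the search has a known answer):

```python
import numpy as np

def aic_of_ar(y, p):
    """AIC of an AR(p) model fit by OLS: AIC = n*ln(RSS/n) + 2*(p+1)."""
    if p == 0:
        resid = y - y.mean()
        n, k = len(y), 1
    else:
        # Lag matrix: column i holds y lagged by i+1 steps.
        X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
        X = np.column_stack([np.ones(len(X)), X])
        target = y[p:]
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        n, k = len(target), p + 1
    rss = float(resid @ resid)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
# Simulate a stationary AR(2) process: y_t = 0.5*y_{t-1} - 0.3*y_{t-2} + noise.
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

aics = {p: aic_of_ar(y, p) for p in range(5)}
best_p = min(aics, key=aics.get)
```

pmdarima's stepwise algorithm explores the order space greedily rather than exhaustively, which is why the traces above visit only a subset of (p, d, q) combinations.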
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04985, saving model to LSTM8.h5
16/16 - 4s - loss: 1.4011 - val_loss: 0.0499 - lr: 0.0010 - 4s/epoch - 245ms/step
Epochs 2–6: val_loss did not improve from 0.04985 (loss 1.3618 → 1.2368, val_loss 0.0509 → 0.0554, lr 1.0000e-03)
Epoch 00006: ReduceLROnPlateau reducing learning rate to 1.0000e-04.
Epochs 7–11: val_loss did not improve from 0.04985 (loss 1.2195 → 1.2093, val_loss 0.0555 → 0.0561)
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000e-05.
Epochs 12–51: val_loss did not improve from 0.04985 (loss 1.2076 → 1.1975, val_loss 0.0561 → 0.0567)
Epoch 00051: early stopping
EMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 36.227409389726965
RMSE: 6.018920948951479
MAPE: 4.70810831106621
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
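TA-Lib's `WMA` weights the last `timeperiod` prices linearly, with the most recent price weighted heaviest. A NumPy sketch assuming that definition (a loop-based illustration, not TA-Lib's vectorized implementation):

```python
import numpy as np

def wma(price, timeperiod=30):
    """Linearly weighted moving average: weights 1..timeperiod,
    with the most recent price getting the largest weight.
    Positions before the first full window are NaN, as in TA-Lib."""
    w = np.arange(1, timeperiod + 1, dtype=float)
    w /= w.sum()
    out = np.full_like(price, np.nan, dtype=float)
    for t in range(timeperiod - 1, len(price)):
        out[t] = price[t - timeperiod + 1:t + 1] @ w
    return out

smoothed = wma(np.linspace(100.0, 110.0, 60), timeperiod=30)
```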
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16080.357, Time=11.09 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14973.799, Time=6.43 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15549.629, Time=1.91 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15317.999, Time=8.51 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16061.924, Time=9.72 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15376.406, Time=14.43 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16186.215, Time=3.58 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15308.706, Time=14.04 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-14920.393, Time=15.30 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-16184.203, Time=3.42 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 88.460 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8118.107
Date: Sun, 12 Dec 2021 AIC -16186.215
Time: 19:53:57 BIC -16068.944
Sample: 0 HQIC -16141.178
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -9.919e-15 0.000 -8.4e-11 1.000 -0.000 0.000
x2 3.194e-15 6.3e-05 5.07e-11 1.000 -0.000 0.000
x3 3.066e-15 7.71e-05 3.98e-11 1.000 -0.000 0.000
x4 1.0000 4.4e-05 2.27e+04 0.000 1.000 1.000
x5 -3.977e-15 4.68e-05 -8.49e-11 1.000 -9.18e-05 9.18e-05
x6 -5.906e-17 8.34e-05 -7.08e-13 1.000 -0.000 0.000
x7 -8.726e-15 7.85e-05 -1.11e-10 1.000 -0.000 0.000
x8 0.0014 4.94e-05 27.704 0.000 0.001 0.001
x9 -3.542e-15 0.001 -2.63e-12 1.000 -0.003 0.003
x10 -0.0012 0.001 -1.566 0.117 -0.003 0.000
x11 0.0052 3.01e-05 172.396 0.000 0.005 0.005
x12 -0.0065 0.000 -49.747 0.000 -0.007 -0.006
x13 1.963e-14 7.85e-05 2.5e-10 1.000 -0.000 0.000
x14 -2.134e-14 0.000 -1.01e-10 1.000 -0.000 0.000
x15 3.464e-12 0.000 2.92e-08 1.000 -0.000 0.000
x16 -7.174e-13 6.45e-05 -1.11e-08 1.000 -0.000 0.000
x17 2.537e-13 7.42e-05 3.42e-09 1.000 -0.000 0.000
x18 -2.964e-15 0.000 -7.78e-12 1.000 -0.001 0.001
x19 -3.613e-12 8.67e-05 -4.17e-08 1.000 -0.000 0.000
x20 6.244e-14 0.000 2.1e-10 1.000 -0.001 0.001
x21 -4.242e-16 0.000 -1.47e-12 1.000 -0.001 0.001
x22 -2.128e-15 0.001 -1.74e-12 1.000 -0.002 0.002
ma.L1 -1.3894 4.16e-05 -3.34e+04 0.000 -1.389 -1.389
ma.L2 0.4036 0.000 3637.465 0.000 0.403 0.404
sigma2 1.287e-10 7.27e-11 1.770 0.077 -1.38e-11 2.71e-10
===================================================================================
Ljung-Box (L1) (Q): 69.00 Jarque-Bera (JB): 6269147.49
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 12.07
Prob(H) (two-sided): 0.00 Kurtosis: 434.65
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.47e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04774, saving model to LSTM8.h5
17/17 - 4s - loss: 1.4119 - val_loss: 0.0477 - lr: 0.0010 - 4s/epoch - 223ms/step
Epochs 2–6: val_loss did not improve from 0.04774 (loss 1.3900 → 1.2946, val_loss 0.0486 → 0.0530, lr 1.0000e-03)
Epoch 00006: ReduceLROnPlateau reducing learning rate to 1.0000e-04.
Epochs 7–11: val_loss did not improve from 0.04774 (loss 1.2783 → 1.2688, val_loss 0.0531 → 0.0536)
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000e-05.
Epochs 12–51: val_loss did not improve from 0.04774 (loss 1.2673 → 1.2586, val_loss 0.0536 → 0.0542)
Epoch 00051: early stopping
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 47.6040579193988
RMSE: 6.8995694010132835
MAPE: 5.522605601178568
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
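`DEMA` is conventionally defined as `2*EMA(price) - EMA(EMA(price))`, which cancels much of a single EMA's lag. A pandas sketch assuming that definition (TA-Lib seeds its EMAs with an SMA of the initial window, so the earliest values will not match exactly):

```python
import numpy as np
import pandas as pd

def dema(price, timeperiod=30):
    """Double EMA: 2*EMA(price) - EMA(EMA(price)).
    The second term subtracts the lag accumulated by smoothing twice."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2

prices = pd.Series(np.linspace(100.0, 110.0, 60))
smoothed = dema(prices, timeperiod=30)
```

On a trending series the DEMA sits closer to the current price than a plain EMA of the same period, which is the indicator's selling point.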
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.780, Time=2.79 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=4.30 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15584.877, Time=8.35 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=5.71 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15271.475, Time=7.68 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15128.422, Time=9.97 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16352.675, Time=17.53 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17028.022, Time=4.81 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17002.621, Time=3.21 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17085.445, Time=6.37 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=17.03 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17001.997, Time=3.96 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16996.668, Time=4.14 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 95.868 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.723
Date: Sun, 12 Dec 2021 AIC -17085.445
Time: 19:59:36 BIC -16958.792
Sample: 0 HQIC -17036.805
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.8e-10 1.36e-20 -2.05e+10 0.000 -2.8e-10 -2.8e-10
x2 -2.817e-10 1.37e-20 -2.06e+10 0.000 -2.82e-10 -2.82e-10
x3 -2.805e-10 1.36e-20 -2.06e+10 0.000 -2.8e-10 -2.8e-10
x4 1.0000 1.37e-20 7.33e+19 0.000 1.000 1.000
x5 -2.598e-10 1.31e-20 -1.98e+10 0.000 -2.6e-10 -2.6e-10
x6 -1.389e-09 2.98e-20 -4.66e+10 0.000 -1.39e-09 -1.39e-09
x7 -2.789e-10 1.36e-20 -2.05e+10 0.000 -2.79e-10 -2.79e-10
x8 -2.761e-10 1.35e-20 -2.04e+10 0.000 -2.76e-10 -2.76e-10
x9 -2.219e-12 3.36e-22 -6.6e+09 0.000 -2.22e-12 -2.22e-12
x10 -1.345e-10 9.37e-21 -1.43e+10 0.000 -1.34e-10 -1.34e-10
x11 -2.899e-10 1.39e-20 -2.09e+10 0.000 -2.9e-10 -2.9e-10
x12 -2.602e-10 1.32e-20 -1.98e+10 0.000 -2.6e-10 -2.6e-10
x13 -2.807e-10 1.36e-20 -2.06e+10 0.000 -2.81e-10 -2.81e-10
x14 -1.87e-09 3.52e-20 -5.31e+10 0.000 -1.87e-09 -1.87e-09
x15 -2.825e-10 1.37e-20 -2.07e+10 0.000 -2.82e-10 -2.82e-10
x16 -8.187e-11 7.33e-21 -1.12e+10 0.000 -8.19e-11 -8.19e-11
x17 -2.441e-10 1.27e-20 -1.92e+10 0.000 -2.44e-10 -2.44e-10
x18 -6.411e-10 2.06e-20 -3.11e+10 0.000 -6.41e-10 -6.41e-10
x19 -2.929e-10 1.39e-20 -2.11e+10 0.000 -2.93e-10 -2.93e-10
x20 -4.339e-10 1.7e-20 -2.56e+10 0.000 -4.34e-10 -4.34e-10
x21 -3.589e-13 2.52e-24 -1.42e+11 0.000 -3.59e-13 -3.59e-13
x22 -1.088e-11 2.36e-24 -4.6e+12 0.000 -1.09e-11 -1.09e-11
ar.L1 -0.4923 1.46e-22 -3.37e+21 0.000 -0.492 -0.492
ar.L2 -0.1923 8.47e-23 -2.27e+21 0.000 -0.192 -0.192
ar.L3 -0.0462 4.02e-23 -1.15e+21 0.000 -0.046 -0.046
ma.L1 -0.7077 3.31e-22 -2.14e+21 0.000 -0.708 -0.708
sigma2 8.99e-11 6.95e-11 1.293 0.196 -4.64e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 55.15 Jarque-Bera (JB): 4171184.78
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.27
Prob(H) (two-sided): 0.00 Kurtosis: 355.49
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.53e+42. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04236, saving model to LSTM8.h5
10/10 - 4s - loss: 1.4274 - val_loss: 0.0424 - lr: 0.0010 - 4s/epoch - 357ms/step
Epochs 2–6: val_loss did not improve from 0.04236 (loss 1.3956 → 1.2661, val_loss 0.0430 → 0.0462, lr 1.0000e-03)
Epoch 00006: ReduceLROnPlateau reducing learning rate to 1.0000e-04.
Epochs 7–11: val_loss did not improve from 0.04236 (loss 1.2424 → 1.2298, val_loss 0.0463 → 0.0466)
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000e-05.
Epochs 12–34: val_loss did not improve from 0.04236 (loss 1.2277 → 1.2212, val_loss 0.0466 → 0.0468)
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2209 - val_loss: 0.0469 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2206 - val_loss: 0.0469 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2203 - val_loss: 0.0469 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2200 - val_loss: 0.0469 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2197 - val_loss: 0.0469 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2194 - val_loss: 0.0469 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2191 - val_loss: 0.0469 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2188 - val_loss: 0.0469 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2185 - val_loss: 0.0469 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2183 - val_loss: 0.0469 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2180 - val_loss: 0.0469 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2177 - val_loss: 0.0470 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2174 - val_loss: 0.0470 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2171 - val_loss: 0.0470 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2168 - val_loss: 0.0470 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2165 - val_loss: 0.0470 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.04236
10/10 - 0s - loss: 1.2162 - val_loss: 0.0470 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 00051: early stopping
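The learning-rate schedule visible in the log above (reductions announced at epochs 6/11/16, each taking effect the following epoch, with 1e-05 as a floor) is the standard ReduceLROnPlateau behaviour. A minimal re-implementation of that logic in plain Python — the `factor=0.1`, `patience=5`, and `min_lr=1e-5` settings are assumptions inferred from the printed messages, not confirmed notebook settings:

```python
def reduce_lr_on_plateau(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Return the learning rate in effect at each epoch, given a val_loss history.

    Mirrors Keras ReduceLROnPlateau: after `patience` epochs without a new best
    val_loss, multiply the lr by `factor` (clamped at `min_lr`); the new lr is
    used from the *next* epoch on, matching the log lines above.
    """
    best, wait, lrs = float("inf"), 0, []
    for loss in val_losses:
        lrs.append(lr)                      # lr actually used this epoch
        if loss < best:
            best, wait = loss, 0            # new best: reset the counter
        else:
            wait += 1
            if wait >= patience:            # plateau detected
                lr = max(lr * factor, min_lr)
                wait = 0

    return lrs

# A stalled history like the run above: only epoch 1 improves.
print(reduce_lr_on_plateau([0.0526] + [0.06] * 11))
```

With this history the lr stays at 1e-3 through epoch 6, drops to 1e-4 for epochs 7–11, and to 1e-5 from epoch 12 — the same pattern as the `lr:` column in the log.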
SMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 24.12057947793772
RMSE: 4.911270658183859
MAPE: 3.8711068958774497
EMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 36.227409389726965
RMSE: 6.018920948951479
MAPE: 4.70810831106621
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 47.6040579193988
RMSE: 6.8995694010132835
MAPE: 5.522605601178568
DEMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 155.67335739769726
RMSE: 12.476912975479843
MAPE: 11.236116903479964
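The MSE/RMSE/MAPE figures and directional-accuracy percentages printed above can be reproduced with a few NumPy one-liners. A minimal sketch — the `close`/`pred` arrays are toy data, and the sign-of-move definition of "accuracy" is an assumption about how the notebook scores direction, not its confirmed formula:

```python
import numpy as np

def mse(y, yhat):
    return float(np.mean((np.asarray(y, float) - np.asarray(yhat, float)) ** 2))

def rmse(y, yhat):
    return mse(y, yhat) ** 0.5

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs((y - yhat) / y)) * 100)

def directional_accuracy(y, yhat):
    """Share of steps where the predicted move has the same sign as the actual move."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.sign(np.diff(y)) == np.sign(np.diff(yhat))) * 100)

close = np.array([100.0, 101.0, 99.5, 100.5])   # toy series
pred  = np.array([100.2, 100.8, 100.0, 100.9])
print(rmse(close, pred), mape(close, pred), directional_accuracy(close, pred))
```

Note that "Prediction vs Close" and "Prediction vs Prediction" above likely differ only in which series the predicted move is compared against; both reduce to a sign-agreement count like the one sketched here.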
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
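The help text above is TA-Lib's. For readers without TA-Lib installed, Kaufman's adaptive moving average can be sketched in plain NumPy. This follows Kaufman's published formula with the conventional fast=2/slow=30 smoothing constants; it is a sketch, not guaranteed to match TA-Lib's output bit-for-bit:

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average: an EMA whose smoothing constant adapts
    to the efficiency ratio (net change / total path length) of the last
    `timeperiod` bars, so it reacts fast in trends and damps in chop."""
    price = np.asarray(price, float)
    fastest, slowest = 2 / (fast + 1), 2 / (slow + 1)
    out = np.full_like(price, np.nan)
    out[timeperiod - 1] = price[timeperiod - 1]        # seed with the price
    for i in range(timeperiod, len(price)):
        change = abs(price[i] - price[i - timeperiod])
        volatility = np.sum(np.abs(np.diff(price[i - timeperiod:i + 1])))
        er = change / volatility if volatility else 0.0  # efficiency ratio in [0, 1]
        sc = (er * (fastest - slowest) + slowest) ** 2   # adaptive smoothing constant
        out[i] = out[i - 1] + sc * (price[i] - out[i - 1])
    return out

trend = np.linspace(100, 130, 60)   # perfectly efficient trend: ER = 1
print(kama(trend, timeperiod=10)[-1])
```

On a straight-line trend the efficiency ratio is 1, so KAMA tracks the price with only a small constant lag; on noisy sideways data the ratio collapses toward 0 and the average nearly freezes.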
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17059.325, Time=4.12 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.593, Time=4.50 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16133.019, Time=6.20 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.593, Time=6.06 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16091.980, Time=7.55 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16009.844, Time=12.63 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-15757.180, Time=9.74 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17029.439, Time=4.68 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-17000.917, Time=4.00 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=45.027, Time=4.74 sec
Best model: ARIMA(1,3,1)(0,0,0)[0]
Total fit time: 64.246 seconds
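The stepwise trace above is simply minimising AIC over candidate (p, d, q) orders; the selection can be reproduced from the printed values (transcribed below):

```python
# (p, d, q) -> AIC, transcribed from the stepwise search output above
candidates = {
    (1, 3, 1): -17059.325, (0, 3, 0): -14572.593, (1, 3, 0): -16133.019,
    (0, 3, 1): -14570.593, (2, 3, 1): -16091.980, (1, 3, 2): -16009.844,
    (0, 3, 2): -15757.180, (2, 3, 0): -17029.439, (2, 3, 2): -17000.917,
}
best_order = min(candidates, key=candidates.get)  # smallest AIC wins
print(best_order)  # (1, 3, 1), matching the "Best model" line above
```

pmdarima's stepwise search does the same comparison, just without fitting the full grid: it walks from a starting order to neighbouring orders and keeps whichever fit lowers the AIC.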
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 1) Log Likelihood 8554.662
Date: Sun, 12 Dec 2021 AIC -17059.325
Time: 20:08:53 BIC -16942.054
Sample: 0 HQIC -17014.288
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.409e-10 5.52e-21 -2.55e+10 0.000 -1.41e-10 -1.41e-10
x2 -1.378e-10 5.47e-21 -2.52e+10 0.000 -1.38e-10 -1.38e-10
x3 -1.323e-10 5.35e-21 -2.47e+10 0.000 -1.32e-10 -1.32e-10
x4 1.0000 5.41e-21 1.85e+20 0.000 1.000 1.000
x5 -1.221e-10 5.15e-21 -2.37e+10 0.000 -1.22e-10 -1.22e-10
x6 -8.465e-10 1.3e-20 -6.53e+10 0.000 -8.47e-10 -8.47e-10
x7 -1.3e-10 5.32e-21 -2.44e+10 0.000 -1.3e-10 -1.3e-10
x8 -1.267e-10 5.27e-21 -2.41e+10 0.000 -1.27e-10 -1.27e-10
x9 -2.032e-11 6.67e-22 -3.05e+10 0.000 -2.03e-11 -2.03e-11
x10 -5.319e-11 2.3e-21 -2.31e+10 0.000 -5.32e-11 -5.32e-11
x11 -1.275e-10 5.28e-21 -2.42e+10 0.000 -1.28e-10 -1.28e-10
x12 -1.262e-10 5.23e-21 -2.41e+10 0.000 -1.26e-10 -1.26e-10
x13 -1.339e-10 5.39e-21 -2.49e+10 0.000 -1.34e-10 -1.34e-10
x14 -1.092e-09 1.55e-20 -7.06e+10 0.000 -1.09e-09 -1.09e-09
x15 -1.342e-10 5.42e-21 -2.48e+10 0.000 -1.34e-10 -1.34e-10
x16 -2.01e-10 6.63e-21 -3.03e+10 0.000 -2.01e-10 -2.01e-10
x17 -1.144e-10 5.01e-21 -2.29e+10 0.000 -1.14e-10 -1.14e-10
x18 -9.245e-11 4.49e-21 -2.06e+10 0.000 -9.24e-11 -9.24e-11
x19 -1.646e-10 6.01e-21 -2.74e+10 0.000 -1.65e-10 -1.65e-10
x20 -2.482e-10 7.35e-21 -3.37e+10 0.000 -2.48e-10 -2.48e-10
x21 -3.385e-12 3.14e-24 -1.08e+12 0.000 -3.39e-12 -3.39e-12
x22 -8.066e-11 2.47e-23 -3.26e+12 0.000 -8.07e-11 -8.07e-11
ar.L1 -0.2877 2.48e-22 -1.16e+21 0.000 -0.288 -0.288
ma.L1 -0.9134 1.05e-21 -8.7e+20 0.000 -0.913 -0.913
sigma2 9.332e-11 6.96e-11 1.340 0.180 -4.32e-11 2.3e-10
===================================================================================
Ljung-Box (L1) (Q): 84.37 Jarque-Bera (JB): 4308764.36
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 5.22
Prob(H) (two-sided): 0.00 Kurtosis: 361.26
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.32e+42. Standard errors may be unstable.
ARIMA order: (1, 3, 1)
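As a sanity check, the AIC/BIC/HQIC in the SARIMAX summary follow directly from the reported log likelihood, with k = 25 estimated parameters (the 22 exogenous coefficients, ar.L1, ma.L1, and sigma2) and n = 808 − 3 = 805 effective observations after third differencing — the 805 here is an inference from the BIC, not a value the summary prints:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * loglik

def hqic(loglik, k, n):
    """Hannan-Quinn information criterion: 2k ln(ln n) - 2 ln L."""
    return 2 * k * math.log(math.log(n)) - 2 * loglik

loglik, k, n = 8554.662, 25, 805   # from the SARIMAX(1, 3, 1) summary above
print(aic(loglik, k), bic(loglik, k, n), hqic(loglik, k, n))
# close to the table's -17059.325 / -16942.054 / -17014.288
```

The tiny discrepancies (third decimal place) come from the summary rounding the log likelihood to three decimals.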
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05262, saving model to LSTM8.h5
45/45 - 4s - loss: 1.4483 - val_loss: 0.0526 - lr: 0.0010 - 4s/epoch - 79ms/step
[Epochs 2–51 elided: val_loss never improved on the epoch-1 value of 0.05262, rising steadily from 0.0542 to 0.0758; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11, while training loss fell from 1.3960 to 1.0695.]
Epoch 00051: early stopping
KAMA
Prediction vs Close: 55.6% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 19.84916238053282
RMSE: 4.45523987912355
MAPE: 3.572304554405335
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-17003.733, Time=2.57 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14572.592, Time=4.14 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15587.551, Time=7.50 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14570.592, Time=5.87 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16365.334, Time=9.84 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16163.760, Time=13.20 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16245.181, Time=13.32 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17028.017, Time=4.78 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-17106.133, Time=5.54 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17085.425, Time=6.93 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=-17000.553, Time=3.98 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 77.702 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood 8579.066
Date: Sun, 12 Dec 2021 AIC -17106.133
Time: 20:13:15 BIC -16984.171
Sample: 0 HQIC -17059.294
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -3.048e-10 1.69e-20 -1.8e+10 0.000 -3.05e-10 -3.05e-10
x2 -3.042e-10 1.75e-20 -1.74e+10 0.000 -3.04e-10 -3.04e-10
x3 -3.108e-10 1.62e-20 -1.92e+10 0.000 -3.11e-10 -3.11e-10
x4 1.0000 1.69e-20 5.91e+19 0.000 1.000 1.000
x5 -2.767e-10 1.61e-20 -1.72e+10 0.000 -2.77e-10 -2.77e-10
x6 -6.072e-09 1.38e-19 -4.42e+10 0.000 -6.07e-09 -6.07e-09
x7 -2.8e-10 1.62e-20 -1.73e+10 0.000 -2.8e-10 -2.8e-10
x8 -2.792e-10 1.65e-20 -1.69e+10 0.000 -2.79e-10 -2.79e-10
x9 -1.502e-10 1.02e-21 -1.48e+11 0.000 -1.5e-10 -1.5e-10
x10 -2.482e-10 4.3e-21 -5.77e+10 0.000 -2.48e-10 -2.48e-10
x11 -2.764e-10 1.64e-20 -1.69e+10 0.000 -2.76e-10 -2.76e-10
x12 -2.857e-10 1.64e-20 -1.74e+10 0.000 -2.86e-10 -2.86e-10
x13 -2.944e-10 1.66e-20 -1.77e+10 0.000 -2.94e-10 -2.94e-10
x14 -2.403e-09 4.86e-20 -4.95e+10 0.000 -2.4e-09 -2.4e-09
x15 -3.368e-10 1.81e-20 -1.86e+10 0.000 -3.37e-10 -3.37e-10
x16 -2.169e-10 1.45e-20 -1.49e+10 0.000 -2.17e-10 -2.17e-10
x17 -2.124e-10 1.44e-20 -1.47e+10 0.000 -2.12e-10 -2.12e-10
x18 -9.125e-10 2.98e-20 -3.06e+10 0.000 -9.13e-10 -9.13e-10
x19 -3.698e-10 1.9e-20 -1.95e+10 0.000 -3.7e-10 -3.7e-10
x20 -8.9e-10 2.94e-20 -3.03e+10 0.000 -8.9e-10 -8.9e-10
x21 -1.844e-11 1.86e-22 -9.9e+10 0.000 -1.84e-11 -1.84e-11
x22 -2.169e-10 5.04e-22 -4.3e+11 0.000 -2.17e-10 -2.17e-10
ar.L1 -1.2011 7.4e-23 -1.62e+22 0.000 -1.201 -1.201
ar.L2 -0.9017 1.51e-22 -5.98e+21 0.000 -0.902 -0.902
ar.L3 -0.4014 9.48e-23 -4.23e+21 0.000 -0.401 -0.401
sigma2 8.782e-11 6.95e-11 1.264 0.206 -4.84e-11 2.24e-10
===================================================================================
Ljung-Box (L1) (Q): 3.61 Jarque-Bera (JB): 16191.93
Prob(Q): 0.06 Prob(JB): 0.00
Heteroskedasticity (H): 0.35 Skew: 0.59
Prob(H) (two-sided): 0.00 Kurtosis: 24.94
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.23e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04423, saving model to LSTM8.h5
58/58 - 4s - loss: 1.2900 - val_loss: 0.0442 - lr: 0.0010 - 4s/epoch - 72ms/step
[Epochs 2–51 elided: val_loss never improved on the epoch-1 value of 0.04423, rising steadily from 0.0484 to 0.0720; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11, while training loss fell from 1.1398 to 0.7827.]
Epoch 00051: early stopping
MIDPOINT
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 21.813261360012138
RMSE: 4.67046693169025
MAPE: 3.6223152079979295
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16954.347, Time=2.93 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14725.736, Time=2.39 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16732.390, Time=8.12 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15913.358, Time=7.01 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16550.077, Time=10.42 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15004.835, Time=9.62 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16027.273, Time=10.33 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-16934.995, Time=2.68 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16924.758, Time=3.47 sec
ARIMA(1,3,1)(0,0,0)[0] intercept : AIC=-16952.347, Time=2.52 sec
Best model: ARIMA(1,3,1)(0,0,0)[0]
Total fit time: 59.502 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(1, 3, 1) Log Likelihood 8502.173
Date: Sun, 12 Dec 2021 AIC -16954.347
Time: 20:16:27 BIC -16837.076
Sample: 0 HQIC -16909.310
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 3.409e-14 2.62e-06 1.3e-08 1.000 -5.13e-06 5.13e-06
x2 1.816e-14 2.62e-06 6.93e-09 1.000 -5.13e-06 5.13e-06
x3 -2.039e-15 2.47e-06 -8.26e-10 1.000 -4.84e-06 4.84e-06
x4 1.0000 2.5e-06 4e+05 0.000 1.000 1.000
x5 2.488e-12 2.48e-06 1e-06 1.000 -4.86e-06 4.86e-06
x6 2.84e-15 6.48e-06 4.38e-10 1.000 -1.27e-05 1.27e-05
x7 3.618e-13 3.24e-06 1.12e-07 1.000 -6.36e-06 6.36e-06
x8 -0.0002 4.44e-06 -43.079 0.000 -0.000 -0.000
x9 2.93e-14 6.3e-08 4.65e-07 1.000 -1.23e-07 1.23e-07
x10 -2.843e-05 9.63e-06 -2.951 0.003 -4.73e-05 -9.55e-06
x11 0.0002 3.28e-06 53.981 0.000 0.000 0.000
x12 0.0001 5.63e-06 23.078 0.000 0.000 0.000
x13 -2.595e-14 2.63e-06 -9.88e-09 1.000 -5.15e-06 5.15e-06
x14 -6.497e-14 5.76e-06 -1.13e-08 1.000 -1.13e-05 1.13e-05
x15 1.699e-12 3.08e-06 5.51e-07 1.000 -6.04e-06 6.04e-06
x16 -3.969e-12 4.77e-06 -8.33e-07 1.000 -9.34e-06 9.34e-06
x17 5.452e-12 8.58e-07 6.35e-06 1.000 -1.68e-06 1.68e-06
x18 -3.68e-13 1.33e-05 -2.76e-08 1.000 -2.61e-05 2.61e-05
x19 -5.643e-13 4.61e-06 -1.22e-07 1.000 -9.03e-06 9.03e-06
x20 6.651e-14 4.9e-05 1.36e-09 1.000 -9.61e-05 9.61e-05
x21 -1.76e-16 8.47e-11 -2.08e-06 1.000 -1.66e-10 1.66e-10
x22 -7.82e-16 1.75e-10 -4.47e-06 1.000 -3.43e-10 3.43e-10
ar.L1 -0.2858 5.46e-08 -5.24e+06 0.000 -0.286 -0.286
ma.L1 -0.9143 5.59e-08 -1.63e+07 0.000 -0.914 -0.914
sigma2 1e-10 6.99e-11 1.430 0.153 -3.71e-11 2.37e-10
===================================================================================
Ljung-Box (L1) (Q): 84.00 Jarque-Bera (JB): 4822228.07
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -6.05
Prob(H) (two-sided): 0.00 Kurtosis: 381.97
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.54e+27. Standard errors may be unstable.
ARIMA order: (1, 3, 1)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05045, saving model to LSTM8.h5
43/43 - 4s - loss: 1.4232 - val_loss: 0.0504 - lr: 0.0010 - 4s/epoch - 92ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.05045
43/43 - 0s - loss: 1.3899 - val_loss: 0.0529 - lr: 0.0010 - 217ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.05045
43/43 - 0s - loss: 1.3362 - val_loss: 0.0554 - lr: 0.0010 - 216ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.05045
43/43 - 0s - loss: 1.2675 - val_loss: 0.0585 - lr: 0.0010 - 235ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.05045
43/43 - 0s - loss: 1.2062 - val_loss: 0.0621 - lr: 0.0010 - 194ms/epoch - 5ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.05045
43/43 - 0s - loss: 1.1541 - val_loss: 0.0659 - lr: 0.0010 - 211ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.05045
43/43 - 0s - loss: 1.1248 - val_loss: 0.0663 - lr: 1.0000e-04 - 220ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.05045
43/43 - 0s - loss: 1.1205 - val_loss: 0.0667 - lr: 1.0000e-04 - 204ms/epoch - 5ms/step
[Epochs 9-50 elided: val_loss never improved on 0.05045; ReduceLROnPlateau cut the learning rate to 1e-05 at epoch 11 and pinned it there from epoch 16; training loss drifted from 1.1162 to 1.0902 while val_loss rose from 0.0671 to 0.0698.]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.05045
43/43 - 0s - loss: 1.0898 - val_loss: 0.0699 - lr: 1.0000e-05 - 201ms/epoch - 5ms/step
Epoch 00051: early stopping
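The checkpoint, learning-rate, and early-stopping messages in the training log come from a standard Keras callback stack. A minimal sketch with a toy model and random data (layer sizes, patience values, and shapes are illustrative, not the notebook's exact configuration):

```python
import numpy as np
from tensorflow import keras

# Toy sequence data standing in for the ARIMA-residual windows.
X = np.random.rand(200, 10, 1).astype('float32')
y = np.random.rand(200, 1).astype('float32')

model = keras.Sequential([
    keras.layers.LSTM(8, input_shape=(10, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss='mse')

callbacks = [
    # Writes LSTM8.h5 whenever val_loss improves, as in the log above.
    keras.callbacks.ModelCheckpoint('LSTM8.h5', monitor='val_loss',
                                    save_best_only=True, verbose=1),
    # Cuts the LR by 10x after a plateau, bottoming out at 1e-05.
    keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                                      patience=5, min_lr=1e-5, verbose=1),
    # Stops training once val_loss has stagnated long enough.
    keras.callbacks.EarlyStopping(monitor='val_loss', patience=50, verbose=1),
]

history = model.fit(X, y, validation_split=0.2, epochs=3,
                    batch_size=16, verbose=2, callbacks=callbacks)
```

With patience values like these, a run that peaks at epoch 1 still trains for the full patience window before stopping, which matches the 51-epoch runs above.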
SMA
Prediction vs Close: 54.48% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 24.12057947793772
RMSE: 4.911270658183859
MAPE: 3.8711068958774497
EMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 36.227409389726965
RMSE: 6.018920948951479
MAPE: 4.70810831106621
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 47.6040579193988
RMSE: 6.8995694010132835
MAPE: 5.522605601178568
DEMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 155.67335739769726
RMSE: 12.476912975479843
MAPE: 11.236116903479964
KAMA
Prediction vs Close: 55.6% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 19.84916238053282
RMSE: 4.45523987912355
MAPE: 3.572304554405335
MIDPOINT
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 21.813261360012138
RMSE: 4.67046693169025
MAPE: 3.6223152079979295
T3
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 67.60005597705626
RMSE: 8.221925320571591
MAPE: 6.604025072859764
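Each per-MA block above reports a directional-accuracy percentage alongside MSE, RMSE, and MAPE. One plausible way to compute such metrics (the function names are illustrative, not the notebook's):

```python
import numpy as np

def directional_accuracy(actual, predicted):
    """Percent of steps where the predicted move matches the actual move."""
    actual_dir = np.sign(np.diff(actual))
    pred_dir = np.sign(np.diff(predicted))
    return 100 * np.mean(actual_dir == pred_dir)

def error_metrics(actual, predicted):
    """Return (MSE, RMSE, MAPE) for a pair of price series."""
    actual = np.asarray(actual)
    err = actual - np.asarray(predicted)
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = 100 * np.mean(np.abs(err / actual))
    return mse, rmse, mape

close = np.array([100.0, 101.5, 100.8, 102.2, 103.0])
pred = np.array([100.2, 101.0, 101.1, 102.0, 103.4])
print(directional_accuracy(close, pred))  # 75.0: 3 of 4 moves match
print(error_metrics(close, pred))
```

"Prediction vs Close" would compare predicted moves against actual closes; "Prediction vs Prediction" would compare consecutive predictions, which is presumably why the two percentages differ.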
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
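The help text above is TA-Lib's TEMA docstring, and the trailing 9 is the period printed for this run. TEMA follows the standard triple-EMA definition (TEMA = 3*EMA1 - 3*EMA2 + EMA3, each EMA feeding the next), which can be sketched in pandas without TA-Lib:

```python
import numpy as np
import pandas as pd

def tema(price: pd.Series, timeperiod: int = 9) -> pd.Series:
    """Triple Exponential Moving Average: 3*EMA1 - 3*EMA2 + EMA3."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    ema3 = ema2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * ema1 - 3 * ema2 + ema3

# On a linear trend the three lag terms cancel, so TEMA tracks price
# almost without lag once the initial transient decays.
close = pd.Series(np.linspace(100, 110, 60))
print(tema(close).tail())
```

The lag cancellation is the point of TEMA (and of DEMA, its two-EMA sibling): each extra smoothing pass doubles the lag, and the 3/-3/+1 weights sum those lags to zero.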
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16412.930, Time=10.43 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14867.265, Time=6.40 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15902.803, Time=5.38 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15117.003, Time=7.68 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15669.652, Time=7.77 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-12676.374, Time=9.47 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16418.724, Time=9.19 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15107.772, Time=14.60 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15708.742, Time=15.23 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-13418.641, Time=23.92 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 110.098 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8234.362
Date: Sun, 12 Dec 2021 AIC -16418.724
Time: 20:21:29 BIC -16301.453
Sample: 0 HQIC -16373.687
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.784e-07 0.001 -0.000 1.000 -0.002 0.002
x2 -1.784e-07 0.001 -0.000 1.000 -0.003 0.003
x3 -1.794e-07 0.001 -0.000 1.000 -0.002 0.002
x4 1.0000 0.000 2616.546 0.000 0.999 1.001
x5 -1.704e-07 0.000 -0.000 1.000 -0.001 0.001
x6 -2.858e-07 3.31e-05 -0.009 0.993 -6.52e-05 6.46e-05
x7 -1.754e-07 0.001 -0.000 1.000 -0.002 0.002
x8 0.0007 0.000 3.091 0.002 0.000 0.001
x9 3.313e-08 0.000 9.39e-05 1.000 -0.001 0.001
x10 3.499e-06 0.000 0.022 0.983 -0.000 0.000
x11 -0.0003 0.000 -1.284 0.199 -0.001 0.000
x12 -6.362e-05 0.000 -0.260 0.795 -0.001 0.000
x13 -1.783e-07 0.000 -0.001 0.999 -0.000 0.000
x14 -5.244e-07 0.001 -0.001 0.999 -0.001 0.001
x15 -1.737e-07 0.000 -0.001 0.999 -0.000 0.000
x16 -2.583e-07 0.000 -0.001 0.999 -0.000 0.000
x17 -1.74e-07 0.000 -0.001 0.999 -0.000 0.000
x18 -5.776e-08 0.000 -0.000 1.000 -0.000 0.000
x19 -1.95e-07 0.000 -0.002 0.999 -0.000 0.000
x20 1.72e-07 0.000 0.001 0.999 -0.000 0.000
x21 -7.548e-10 0.001 -9.93e-07 1.000 -0.001 0.001
x22 -1.194e-08 0.000 -8.47e-05 1.000 -0.000 0.000
ma.L1 -1.3862 1.58e-05 -8.78e+04 0.000 -1.386 -1.386
ma.L2 0.4019 4.28e-05 9396.834 0.000 0.402 0.402
sigma2 1.265e-10 7.58e-11 1.669 0.095 -2.2e-11 2.75e-10
===================================================================================
Ljung-Box (L1) (Q): 66.79 Jarque-Bera (JB): 5900482.38
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -11.32
Prob(H) (two-sided): 0.00 Kurtosis: 421.81
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.07e+19. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
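The enormous Jarque-Bera statistics and kurtosis values in the two diagnostic tables above (kurtosis 381.97 and 421.81, against a mesokurtic benchmark of 3) confirm heavily non-normal residuals, which is the balance issue raised at the top of this section. The same diagnostics can be run with scipy; a sketch on a heavy-tailed stand-in series:

```python
import numpy as np
from scipy import stats

# Heavy-tailed stand-in for model residuals (Student's t, 3 d.o.f.).
rng = np.random.default_rng(1)
residuals = rng.standard_t(df=3, size=1000)

jb_stat, jb_p = stats.jarque_bera(residuals)
# scipy reports *excess* kurtosis (0 for a normal distribution);
# the statsmodels tables above report raw kurtosis (3 for a normal).
excess_kurt = stats.kurtosis(residuals)

print(f"JB statistic: {jb_stat:.1f}, p-value: {jb_p:.4f}")
print(f"Excess kurtosis: {excess_kurt:.2f}")
```

A tiny p-value rejects normality, and large positive excess kurtosis is the leptokurtic signature the opening paragraph suggests shorter MA periods might soften.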
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05051, saving model to LSTM8.h5
90/90 - 4s - loss: 1.3468 - val_loss: 0.0505 - lr: 0.0010 - 4s/epoch - 43ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.05051
90/90 - 0s - loss: 1.1420 - val_loss: 0.0570 - lr: 0.0010 - 402ms/epoch - 4ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.05051
90/90 - 0s - loss: 0.9994 - val_loss: 0.0645 - lr: 0.0010 - 424ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.05051
90/90 - 0s - loss: 0.9081 - val_loss: 0.0732 - lr: 0.0010 - 387ms/epoch - 4ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.05051
90/90 - 0s - loss: 0.8459 - val_loss: 0.0826 - lr: 0.0010 - 429ms/epoch - 5ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.05051
90/90 - 0s - loss: 0.8000 - val_loss: 0.0924 - lr: 0.0010 - 406ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.05051
90/90 - 0s - loss: 0.7769 - val_loss: 0.0934 - lr: 1.0000e-04 - 404ms/epoch - 4ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.05051
90/90 - 0s - loss: 0.7734 - val_loss: 0.0945 - lr: 1.0000e-04 - 408ms/epoch - 5ms/step
[Epochs 9-50 elided: val_loss never improved on 0.05051; ReduceLROnPlateau cut the learning rate to 1e-05 at epoch 11 and pinned it there from epoch 16; training loss drifted from 0.7699 to 0.7448 while val_loss rose from 0.0956 to 0.1054.]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.05051
90/90 - 0s - loss: 0.7444 - val_loss: 0.1056 - lr: 1.0000e-05 - 402ms/epoch - 4ms/step
Epoch 00051: early stopping
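A common way to wire such a hybrid, and an assumption about this notebook's design rather than a confirmed detail, is to fit ARIMA to the smoothed series and train the LSTM on the ARIMA residuals, summing the two forecasts. A dependency-free sketch with a moving-average stand-in for the LSTM:

```python
import numpy as np

# Toy series: a linear trend (the part an ARIMA fit captures well)
# plus a smooth oscillation (the residual a second model can learn).
t = np.linspace(0, 6, 50)
actual = np.sin(t) + np.linspace(0, 5, 50)

arima_fit = np.linspace(0, 5, 50)       # stand-in ARIMA forecast
residuals = actual - arima_fit          # what the LSTM would train on

# Moving-average stand-in for the LSTM's residual prediction.
residual_pred = np.convolve(residuals, np.ones(3) / 3, mode='same')

hybrid = arima_fit + residual_pred      # combined forecast
mse_arima = np.mean((actual - arima_fit) ** 2)
mse_hybrid = np.mean((actual - hybrid) ** 2)
print(mse_hybrid < mse_arima)           # residual model tightens the fit
```

The division of labor is the whole argument for the hybrid: the linear model absorbs what it models cheaply, and the nonlinear model only has to explain what is left.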
TEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 24.243134917893084
RMSE: 4.923731808079425
MAPE: 4.330583242877685
Runtime: 45.86 minutes
from google.colab import files
import cv2
import matplotlib.pyplot as plt

uploaded = files.upload()
Saving Experiment8.png to Experiment8 (1).png
imgfile = 'Experiment8.png'
img = cv2.cvtColor(cv2.imread(imgfile), cv2.COLOR_BGR2RGB)  # OpenCV reads BGR; convert for matplotlib
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
import json

with open('simulation8_data.json') as json_file:
    simulation8 = json.load(json_file)

fileimg = 'Experiment8'
for SIM in simulation8:  # iterate over the moving-average keys directly
    plot_train(simulation8, SIM)
    plot_test(simulation8, SIM)
SMA train: RMSE 20.144952693145125, MSE 405.81911900905504, MAE 20.137185568856722
SMA test: RMSE 4.911270658183859, MSE 24.12057947793772, MAE 3.8711068958774497
EMA train: RMSE 24.237983942165165, MSE 587.4798655806565, MAE 24.233408446359164
EMA test: RMSE 6.018920948951479, MSE 36.227409389726965, MAE 4.70810831106621
WMA train: RMSE 24.839096992968393, MSE 616.9807394260914, MAE 24.837005199772296
WMA test: RMSE 6.8995694010132835, MSE 47.6040579193988, MAE 5.522605601178568
DEMA train: RMSE 26.637746302738936, MSE 709.5695280890818, MAE 26.62872602916
DEMA test: RMSE 12.476912975479843, MSE 155.67335739769726, MAE 11.236116903479964
KAMA train: RMSE 20.429213713281705, MSE 417.35277294293724, MAE 20.425471756717947
KAMA test: RMSE 4.45523987912355, MSE 19.84916238053282, MAE 3.572304554405335
MIDPOINT train: RMSE 16.856157067120407, MSE 284.1300310714332, MAE 16.801196841910333
MIDPOINT test: RMSE 4.67046693169025, MSE 21.813261360012138, MAE 3.6223152079979295
T3 train: RMSE 23.819539505216678, MSE 567.370462240578, MAE 23.812523622323972
T3 test: RMSE 8.221925320571591, MSE 67.60005597705626, MAE 6.604025072859764
TEMA train: RMSE 20.0322994797993, MSE 401.2930224483673, MAE 19.998681748267447
TEMA test: RMSE 4.923731808079425, MSE 24.243134917893084, MAE 4.330583242877685
import json

def load_simulation(path):
    with open(path) as json_file:
        return json.load(json_file)

simulations = [load_simulation(f'simulation{n}_data.json') for n in range(1, 9)]

text = 'Stock with Covid Trends '
for i, simulation in enumerate(simulations, start=1):
    for ma in simulation:
        final = simulation[ma]['final']
        print(f"{text}Experiment {i} for MA {ma}: "
              f"MSE {final['mse']}, RMSE {final['rmse']}, MAE {final['mae']}")
Stock with Covid Trends Experiment 1 for MA SMA: MSE 29.531169515594907, RMSE 5.434258874547192, MAE 4.511922179897357
Stock with Covid Trends Experiment 1 for MA EMA: MSE 44.2948843329178, RMSE 6.655440205795392, MAE 5.1903345685841265
Stock with Covid Trends Experiment 1 for MA WMA: MSE 34.81241095672678, RMSE 5.900204314829002, MAE 4.770935413189914
Stock with Covid Trends Experiment 1 for MA DEMA: MSE 52.107642174945944, RMSE 7.2185623343534235, MAE 5.72607728989529
Stock with Covid Trends Experiment 1 for MA KAMA: MSE 101.2314633840329, RMSE 10.06138476473457, MAE 7.671150891933135
Stock with Covid Trends Experiment 1 for MA MIDPOINT: MSE 120.91184599154492, RMSE 10.995992269529154, MAE 9.137686493675425
Stock with Covid Trends Experiment 1 for MA T3: MSE 41.51394815576297, RMSE 6.443131859256255, MAE 5.507945991108928
Stock with Covid Trends Experiment 1 for MA TEMA: MSE 72.3302204365722, RMSE 8.504717540081634, MAE 7.413730210152267
Stock with Covid Trends Experiment 2 for MA SMA: MSE 63.041023819643854, RMSE 7.939837770360541, MAE 6.449589599500938
Stock with Covid Trends Experiment 2 for MA EMA: MSE 63.66877348603133, RMSE 7.979271488427457, MAE 6.567170782771208
Stock with Covid Trends Experiment 2 for MA WMA: MSE 74.84193590201411, RMSE 8.65112338959595, MAE 6.92726320779593
Stock with Covid Trends Experiment 2 for MA DEMA: MSE 124.07774757087437, RMSE 11.139019147612341, MAE 9.962964959911572
Stock with Covid Trends Experiment 2 for MA KAMA: MSE 64.92528911521055, RMSE 8.057623043752454, MAE 6.682416615913553
Stock with Covid Trends Experiment 2 for MA MIDPOINT: MSE 68.19255604013144, RMSE 8.25787842246006, MAE 6.72839330666561
Stock with Covid Trends Experiment 2 for MA T3: MSE 149.0300312328299, RMSE 12.207785680983669, MAE 10.094975187792123
Stock with Covid Trends Experiment 2 for MA TEMA: MSE 71.80641753112648, RMSE 8.473866740227066, MAE 7.512371017185029
Stock with Covid Trends Experiment 3 for MA SMA: MSE 123.96893050522607, RMSE 11.134133576764116, MAE 9.602398807260117
Stock with Covid Trends Experiment 3 for MA EMA: MSE 63.919262026708296, RMSE 7.994952284204596, MAE 6.479287961204322
Stock with Covid Trends Experiment 3 for MA WMA: MSE 24.651058301828286, RMSE 4.9649832126431495, MAE 3.9308905500983484
Stock with Covid Trends Experiment 3 for MA DEMA: MSE 156.8635759091866, RMSE 12.524518989134338, MAE 11.387412907589542
Stock with Covid Trends Experiment 3 for MA KAMA: MSE 59.19746610115158, RMSE 7.69398895899595, MAE 6.776737847872761
Stock with Covid Trends Experiment 3 for MA MIDPOINT: MSE 46.490023595118274, RMSE 6.818359303756166, MAE 5.538801606657957
Stock with Covid Trends Experiment 3 for MA T3: MSE 57.75776139981352, RMSE 7.59985272224492, MAE 6.172107202063374
Stock with Covid Trends Experiment 3 for MA TEMA: MSE 61.81638170069383, RMSE 7.862339454684835, MAE 7.157520441443416
Stock with Covid Trends Experiment 4 for MA SMA: MSE 22.0961825771905, RMSE 4.700657674963207, MAE 3.7488296078488137
Stock with Covid Trends Experiment 4 for MA EMA: MSE 36.69312385194829, RMSE 6.057484944426053, MAE 4.755707959713801
Stock with Covid Trends Experiment 4 for MA WMA: MSE 61.47074835668693, RMSE 7.8403283321992925, MAE 6.468176158698829
Stock with Covid Trends Experiment 4 for MA DEMA: MSE 114.21230424130383, RMSE 10.687015684525958, MAE 9.305044543155903
Stock with Covid Trends Experiment 4 for MA KAMA: MSE 21.57120658320832, RMSE 4.6444813040002995, MAE 3.6837316829247877
Stock with Covid Trends Experiment 4 for MA MIDPOINT: MSE 17.38125304406819, RMSE 4.169082997982673, MAE 3.3993243705608664
Stock with Covid Trends Experiment 4 for MA T3: MSE 60.321913944220896, RMSE 7.766718351029661, MAE 6.200911576902634
Stock with Covid Trends Experiment 4 for MA TEMA: MSE 25.760985062606874, RMSE 5.075528057513511, MAE 4.549137795705406
Stock with Covid Trends Experiment 5 for MA SMA: MSE 36.387272258848725, RMSE 6.032186358100081, MAE 4.990569235256131
Stock with Covid Trends Experiment 5 for MA EMA: MSE 72.47565418845511, RMSE 8.513263427643661, MAE 6.94585827976211
Stock with Covid Trends Experiment 5 for MA WMA: MSE 29.73090246364654, RMSE 5.452605107987057, MAE 4.390044818690696
Stock with Covid Trends Experiment 5 for MA DEMA: MSE 39.142904723518775, RMSE 6.256429071244936, MAE 4.920393911559133
Stock with Covid Trends Experiment 5 for MA KAMA: MSE 52.56428057408519, RMSE 7.25012279717283, MAE 6.170488218753182
Stock with Covid Trends Experiment 5 for MA MIDPOINT: MSE 44.21016710593271, RMSE 6.649072650071791, MAE 5.476790088019583
Stock with Covid Trends Experiment 5 for MA T3: MSE 64.94642382025489, RMSE 8.058934409725326, MAE 6.415762745110697
Stock with Covid Trends Experiment 5 for MA TEMA: MSE 29.21500753639505, RMSE 5.4050908906691895, MAE 4.44965723634719
Stock with Covid Trends Experiment 6 for MA SMA: MSE 60.485697397526344, RMSE 7.777255132598284, MAE [output truncated]
6.358945125308518 Stock with Covid Trends Experiment 6 for MA : EMA the MSE is: 58.20305175219876 Stock with Covid Trends Experiment 6 for MA : EMA the RMSE is: 7.629092459277103 Stock with Covid Trends Experiment 6 for MA : EMA the MAE is: 6.21442849961768 Stock with Covid Trends Experiment 6 for MA : WMA the MSE is: 70.88350276857014 Stock with Covid Trends Experiment 6 for MA : WMA the RMSE is: 8.419234096316014 Stock with Covid Trends Experiment 6 for MA : WMA the MAE is: 6.6789569931753 Stock with Covid Trends Experiment 6 for MA : DEMA the MSE is: 119.53246002468391 Stock with Covid Trends Experiment 6 for MA : DEMA the RMSE is: 10.933090140700566 Stock with Covid Trends Experiment 6 for MA : DEMA the MAE is: 9.747683697911842 Stock with Covid Trends Experiment 6 for MA : KAMA the MSE is: 61.13308833987969 Stock with Covid Trends Experiment 6 for MA : KAMA the RMSE is: 7.818765141624327 Stock with Covid Trends Experiment 6 for MA : KAMA the MAE is: 6.461585168646619 Stock with Covid Trends Experiment 6 for MA : MIDPOINT the MSE is: 61.5384692642518 Stock with Covid Trends Experiment 6 for MA : MIDPOINT the RMSE is: 7.8446458979517875 Stock with Covid Trends Experiment 6 for MA : MIDPOINT the MAE is: 6.407298993379305 Stock with Covid Trends Experiment 6 for MA : T3 the MSE is: 163.02597008234568 Stock with Covid Trends Experiment 6 for MA : T3 the RMSE is: 12.768162361214932 Stock with Covid Trends Experiment 6 for MA : T3 the MAE is: 10.498544939048504 Stock with Covid Trends Experiment 6 for MA : TEMA the MSE is: 66.14227466119469 Stock with Covid Trends Experiment 6 for MA : TEMA the RMSE is: 8.132790090811067 Stock with Covid Trends Experiment 6 for MA : TEMA the MAE is: 7.1170786919128775 Stock with Covid Trends Experiment 7 for MA : SMA the MSE is: 23.38002191723926 Stock with Covid Trends Experiment 7 for MA : SMA the RMSE is: 4.835289227878645 Stock with Covid Trends Experiment 7 for MA : SMA the MAE is: 3.8675720673818827 Stock with Covid Trends 
Experiment 7 for MA : EMA the MSE is: 35.056668726825066 Stock with Covid Trends Experiment 7 for MA : EMA the RMSE is: 5.920867227596399 Stock with Covid Trends Experiment 7 for MA : EMA the MAE is: 4.704877912816018 Stock with Covid Trends Experiment 7 for MA : WMA the MSE is: 44.87192646385527 Stock with Covid Trends Experiment 7 for MA : WMA the RMSE is: 6.698651092858566 Stock with Covid Trends Experiment 7 for MA : WMA the MAE is: 5.33068935026581 Stock with Covid Trends Experiment 7 for MA : DEMA the MSE is: 53.079656203261706 Stock with Covid Trends Experiment 7 for MA : DEMA the RMSE is: 7.285578645739933 Stock with Covid Trends Experiment 7 for MA : DEMA the MAE is: 5.726487515550782 Stock with Covid Trends Experiment 7 for MA : KAMA the MSE is: 30.678794294842323 Stock with Covid Trends Experiment 7 for MA : KAMA the RMSE is: 5.5388441298561855 Stock with Covid Trends Experiment 7 for MA : KAMA the MAE is: 4.336649130448084 Stock with Covid Trends Experiment 7 for MA : MIDPOINT the MSE is: 19.38951232132957 Stock with Covid Trends Experiment 7 for MA : MIDPOINT the RMSE is: 4.4033523957695655 Stock with Covid Trends Experiment 7 for MA : MIDPOINT the MAE is: 3.5042510250586574 Stock with Covid Trends Experiment 7 for MA : T3 the MSE is: 90.72292612095576 Stock with Covid Trends Experiment 7 for MA : T3 the RMSE is: 9.524858325505727 Stock with Covid Trends Experiment 7 for MA : T3 the MAE is: 7.398189805001564 Stock with Covid Trends Experiment 7 for MA : TEMA the MSE is: 46.925505559638836 Stock with Covid Trends Experiment 7 for MA : TEMA the RMSE is: 6.850219380402268 Stock with Covid Trends Experiment 7 for MA : TEMA the MAE is: 5.67920042842036 Stock with Covid Trends Experiment 8 for MA : SMA the MSE is: 24.12057947793772 Stock with Covid Trends Experiment 8 for MA : SMA the RMSE is: 4.911270658183859 Stock with Covid Trends Experiment 8 for MA : SMA the MAE is: 3.8711068958774497 Stock with Covid Trends Experiment 8 for MA : EMA the MSE is: 
36.227409389726965 Stock with Covid Trends Experiment 8 for MA : EMA the RMSE is: 6.018920948951479 Stock with Covid Trends Experiment 8 for MA : EMA the MAE is: 4.70810831106621 Stock with Covid Trends Experiment 8 for MA : WMA the MSE is: 47.6040579193988 Stock with Covid Trends Experiment 8 for MA : WMA the RMSE is: 6.8995694010132835 Stock with Covid Trends Experiment 8 for MA : WMA the MAE is: 5.522605601178568 Stock with Covid Trends Experiment 8 for MA : DEMA the MSE is: 155.67335739769726 Stock with Covid Trends Experiment 8 for MA : DEMA the RMSE is: 12.476912975479843 Stock with Covid Trends Experiment 8 for MA : DEMA the MAE is: 11.236116903479964 Stock with Covid Trends Experiment 8 for MA : KAMA the MSE is: 19.84916238053282 Stock with Covid Trends Experiment 8 for MA : KAMA the RMSE is: 4.45523987912355 Stock with Covid Trends Experiment 8 for MA : KAMA the MAE is: 3.572304554405335 Stock with Covid Trends Experiment 8 for MA : MIDPOINT the MSE is: 21.813261360012138 Stock with Covid Trends Experiment 8 for MA : MIDPOINT the RMSE is: 4.67046693169025 Stock with Covid Trends Experiment 8 for MA : MIDPOINT the MAE is: 3.6223152079979295 Stock with Covid Trends Experiment 8 for MA : T3 the MSE is: 67.60005597705626 Stock with Covid Trends Experiment 8 for MA : T3 the RMSE is: 8.221925320571591 Stock with Covid Trends Experiment 8 for MA : T3 the MAE is: 6.604025072859764 Stock with Covid Trends Experiment 8 for MA : TEMA the MSE is: 24.243134917893084 Stock with Covid Trends Experiment 8 for MA : TEMA the RMSE is: 4.923731808079425 Stock with Covid Trends Experiment 8 for MA : TEMA the MAE is: 4.330583242877685
text = 'Stock with Covid Trends '
simulations = [simulation1, simulation2, simulation3, simulation4,
               simulation5, simulation6, simulation7, simulation8]
for i, simulation in enumerate(simulations, start=1):
    # One pass per metric so that like metrics are grouped in the output.
    for metric in ('rmse', 'mse', 'mae'):
        for ma in simulation:
            print(text + 'Experiment', i, 'for MA :', ma,
                  'the ' + metric.upper() + ' is:', simulation[ma]['final'][metric])
Stock with Covid Trends — error metrics by experiment and moving average:

Experiment | MA       | RMSE               | MSE                | MAE
---------- | -------- | ------------------ | ------------------ | -------------------
1          | SMA      | 5.434258874547192  | 29.531169515594907 | 4.511922179897357
1          | EMA      | 6.655440205795392  | 44.2948843329178   | 5.1903345685841265
1          | WMA      | 5.900204314829002  | 34.81241095672678  | 4.770935413189914
1          | DEMA     | 7.2185623343534235 | 52.107642174945944 | 5.72607728989529
1          | KAMA     | 10.06138476473457  | 101.2314633840329  | 7.671150891933135
1          | MIDPOINT | 10.995992269529154 | 120.91184599154492 | 9.137686493675425
1          | T3       | 6.443131859256255  | 41.51394815576297  | 5.507945991108928
1          | TEMA     | 8.504717540081634  | 72.3302204365722   | 7.413730210152267
2          | SMA      | 7.939837770360541  | 63.041023819643854 | 6.449589599500938
2          | EMA      | 7.979271488427457  | 63.66877348603133  | 6.567170782771208
2          | WMA      | 8.65112338959595   | 74.84193590201411  | 6.92726320779593
2          | DEMA     | 11.139019147612341 | 124.07774757087437 | 9.962964959911572
2          | KAMA     | 8.057623043752454  | 64.92528911521055  | 6.682416615913553
2          | MIDPOINT | 8.25787842246006   | 68.19255604013144  | 6.72839330666561
2          | T3       | 12.207785680983669 | 149.0300312328299  | 10.094975187792123
2          | TEMA     | 8.473866740227066  | 71.80641753112648  | 7.512371017185029
3          | SMA      | 11.134133576764116 | 123.96893050522607 | 9.602398807260117
3          | EMA      | 7.994952284204596  | 63.919262026708296 | 6.479287961204322
3          | WMA      | 4.9649832126431495 | 24.651058301828286 | 3.9308905500983484
3          | DEMA     | 12.524518989134338 | 156.8635759091866  | 11.387412907589542
3          | KAMA     | 7.69398895899595   | 59.19746610115158  | 6.776737847872761
3          | MIDPOINT | 6.818359303756166  | 46.490023595118274 | 5.538801606657957
3          | T3       | 7.59985272224492   | 57.75776139981352  | 6.172107202063374
3          | TEMA     | 7.862339454684835  | 61.81638170069383  | 7.157520441443416
4          | SMA      | 4.700657674963207  | 22.0961825771905   | 3.7488296078488137
4          | EMA      | 6.057484944426053  | 36.69312385194829  | 4.755707959713801
4          | WMA      | 7.8403283321992925 | 61.47074835668693  | 6.468176158698829
4          | DEMA     | 10.687015684525958 | 114.21230424130383 | 9.305044543155903
4          | KAMA     | 4.6444813040002995 | 21.57120658320832  | 3.6837316829247877
4          | MIDPOINT | 4.169082997982673  | 17.38125304406819  | 3.3993243705608664
4          | T3       | 7.766718351029661  | 60.321913944220896 | 6.200911576902634
4          | TEMA     | 5.075528057513511  | 25.760985062606874 | 4.549137795705406
5          | SMA      | 6.032186358100081  | 36.387272258848725 | 4.990569235256131
5          | EMA      | 8.513263427643661  | 72.47565418845511  | 6.94585827976211
5          | WMA      | 5.452605107987057  | 29.73090246364654  | 4.390044818690696
5          | DEMA     | 6.256429071244936  | 39.142904723518775 | 4.920393911559133
5          | KAMA     | 7.25012279717283   | 52.56428057408519  | 6.170488218753182
5          | MIDPOINT | 6.649072650071791  | 44.21016710593271  | 5.476790088019583
5          | T3       | 8.058934409725326  | 64.94642382025489  | 6.415762745110697
5          | TEMA     | 5.4050908906691895 | 29.21500753639505  | 4.44965723634719
6          | SMA      | 7.777255132598284  | 60.485697397526344 | 6.358945125308518
6          | EMA      | 7.629092459277103  | 58.20305175219876  | 6.21442849961768
6          | WMA      | 8.419234096316014  | 70.88350276857014  | 6.6789569931753
6          | DEMA     | 10.933090140700566 | 119.53246002468391 | 9.747683697911842
6          | KAMA     | 7.818765141624327  | 61.13308833987969  | 6.461585168646619
6          | MIDPOINT | 7.8446458979517875 | 61.5384692642518   | 6.407298993379305
6          | T3       | 12.768162361214932 | 163.02597008234568 | 10.498544939048504
6          | TEMA     | 8.132790090811067  | 66.14227466119469  | 7.1170786919128775
7          | SMA      | 4.835289227878645  | 23.38002191723926  | 3.8675720673818827
7          | EMA      | 5.920867227596399  | 35.056668726825066 | 4.704877912816018
7          | WMA      | 6.698651092858566  | 44.87192646385527  | 5.33068935026581
7          | DEMA     | 7.285578645739933  | 53.079656203261706 | 5.726487515550782
7          | KAMA     | 5.5388441298561855 | 30.678794294842323 | 4.336649130448084
7          | MIDPOINT | 4.4033523957695655 | 19.38951232132957  | 3.5042510250586574
7          | T3       | 9.524858325505727  | 90.72292612095576  | 7.398189805001564
7          | TEMA     | 6.850219380402268  | 46.925505559638836 | 5.67920042842036
8          | SMA      | 4.911270658183859  | 24.12057947793772  | 3.8711068958774497
8          | EMA      | 6.018920948951479  | 36.227409389726965 | 4.70810831106621
8          | WMA      | 6.8995694010132835 | 47.6040579193988   | 5.522605601178568
8          | DEMA     | 12.476912975479843 | 155.67335739769726 | 11.236116903479964
8          | KAMA     | 4.45523987912355   | 19.84916238053282  | 3.572304554405335
8          | MIDPOINT | 4.67046693169025   | 21.813261360012138 | 3.6223152079979295
8          | T3       | 8.221925320571591  | 67.60005597705626  | 6.604025072859764
8          | TEMA     | 4.923731808079425  | 24.243134917893084 | 4.330583242877685
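The per-experiment results above are easier to compare when collected into a single DataFrame. A minimal sketch, assuming each simulation dict has the `simulation[ma]['final']` structure used in the print loops; the two-entry `simulations` list here is a hypothetical stand-in for the notebook's `simulation1`–`simulation8`:

```python
import pandas as pd

# Hypothetical stand-in for the notebook's simulation1..simulation8 dicts;
# each maps an MA name to {'final': {'mse': ..., 'rmse': ..., 'mae': ...}}.
simulations = [
    {'SMA':  {'final': {'mse': 29.53,  'rmse': 5.43,  'mae': 4.51}},
     'KAMA': {'final': {'mse': 101.23, 'rmse': 10.06, 'mae': 7.67}}},
    {'SMA':  {'final': {'mse': 63.04,  'rmse': 7.94,  'mae': 6.45}},
     'KAMA': {'final': {'mse': 64.93,  'rmse': 8.06,  'mae': 6.68}}},
]

# Flatten the nested dicts into one row per (experiment, MA) pair.
rows = [
    {'experiment': i, 'ma': ma, **sim[ma]['final']}
    for i, sim in enumerate(simulations, start=1)
    for ma in sim
]
df = pd.DataFrame(rows)

# Lowest RMSE per experiment picks the best-performing moving average.
best = df.loc[df.groupby('experiment')['rmse'].idxmin()]
print(best[['experiment', 'ma', 'rmse']])
```

Sorting `df` by any of the three metric columns gives the same kind of ranking across all experiments at once.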
# Change into the notebook's Drive folder, then export it to HTML.
cd ..
cd drive/MyDrive/Stock price prediction/Archana - LSTM Hybrid
%%shell
jupyter nbconvert --to html LSTM_Hybrid_using_TA_LIB_Covid.ipynb
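For reference, the `mse`, `rmse`, and `mae` values stored under each simulation's `'final'` key can be reproduced from a pair of actual/predicted price series. This is a sketch of the standard definitions, not the notebook's own helper:

```python
import numpy as np

def final_metrics(y_true, y_pred):
    """Return the three error metrics reported above, with keys
    matching the assumed simulation[ma]['final'] layout."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    mse = float(np.mean(err ** 2))
    return {
        'mse': mse,
        'rmse': float(np.sqrt(mse)),          # RMSE is the square root of MSE
        'mae': float(np.mean(np.abs(err))),   # mean absolute error
    }

print(final_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```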